Jan 21 14:33:56 crc systemd[1]: Starting Kubernetes Kubelet... Jan 21 14:33:56 crc restorecon[4755]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 21 14:33:56 
crc restorecon[4755]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 21 14:33:56 crc restorecon[4755]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 
14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 21 14:33:56 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc 
restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 
crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 
crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 
14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:33:57 crc 
restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc 
restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:33:57 crc restorecon[4755]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 14:33:58 crc kubenswrapper[4902]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 14:33:58 crc kubenswrapper[4902]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 14:33:58 crc kubenswrapper[4902]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 14:33:58 crc kubenswrapper[4902]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 14:33:58 crc kubenswrapper[4902]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 14:33:58 crc kubenswrapper[4902]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.120379 4902 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.124974 4902 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.124996 4902 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125002 4902 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125007 4902 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125013 4902 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125019 4902 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125024 4902 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125030 4902 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125035 4902 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125044 4902 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125049 4902 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125074 4902 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125080 4902 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125087 4902 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125092 4902 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125097 4902 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125101 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125117 4902 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125122 4902 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125127 4902 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125132 4902 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125137 4902 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125141 4902 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125146 4902 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125151 4902 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125155 4902 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125160 4902 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125165 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125169 4902 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125174 4902 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125179 4902 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125184 4902 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125189 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125193 4902 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125198 4902 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125203 4902 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125210 4902 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
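The W0121/I0121 entries above use the klog header layout: a severity letter (I, W, E), the month and day, the wall-clock time, the emitting PID, and the source file and line, followed by the message. A small sketch, assuming the journal has been saved to plain text, that splits that header apart; the group names are my own labels, not kubelet terminology.

import re

# Hypothetical parser for klog-style entries such as
# "W0121 14:33:58.125087 4902 feature_gate.go:330] unrecognized feature gate: ...".
KLOG_RE = re.compile(
    r"(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+(?P<pid>\d+)\s+"
    r"(?P<source>\S+:\d+)\] (?P<message>.*)"
)

def parse_klog(line):
    """Return the klog header fields of a kubelet log line, or None."""
    m = KLOG_RE.search(line)
    return m.groupdict() if m else None

print(parse_klog(
    "W0121 14:33:58.125087 4902 feature_gate.go:330] "
    "unrecognized feature gate: VSphereMultiVCenters"
))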
Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125216 4902 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125222 4902 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125228 4902 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125233 4902 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125240 4902 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125245 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125250 4902 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125254 4902 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125259 4902 feature_gate.go:330] unrecognized feature gate: Example Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125264 4902 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125269 4902 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125273 4902 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125278 4902 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125283 4902 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125287 4902 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125292 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125297 4902 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125302 4902 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125306 4902 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125311 4902 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125316 4902 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125321 4902 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125326 4902 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125330 4902 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125335 4902 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125340 4902 feature_gate.go:330] unrecognized 
feature gate: BootcNodeManagement Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125345 4902 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125352 4902 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125358 4902 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125363 4902 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125368 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125374 4902 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125379 4902 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.125384 4902 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125736 4902 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125753 4902 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125764 4902 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125772 4902 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125779 4902 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125785 4902 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125793 4902 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125800 4902 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125806 4902 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125811 4902 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125817 4902 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125823 4902 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125829 4902 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125835 4902 flags.go:64] FLAG: --cgroup-root="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125840 4902 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125846 4902 flags.go:64] FLAG: --client-ca-file="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125852 4902 flags.go:64] FLAG: --cloud-config="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125857 4902 flags.go:64] FLAG: --cloud-provider="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125863 4902 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125873 4902 flags.go:64] FLAG: 
--cluster-domain="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125879 4902 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125884 4902 flags.go:64] FLAG: --config-dir="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125890 4902 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125896 4902 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125903 4902 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125909 4902 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125915 4902 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125921 4902 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125927 4902 flags.go:64] FLAG: --contention-profiling="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125932 4902 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125938 4902 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125944 4902 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125949 4902 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125957 4902 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125964 4902 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125969 4902 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125975 4902 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125983 4902 flags.go:64] FLAG: --enable-server="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125989 4902 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.125996 4902 flags.go:64] FLAG: --event-burst="100" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126003 4902 flags.go:64] FLAG: --event-qps="50" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126008 4902 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126014 4902 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126019 4902 flags.go:64] FLAG: --eviction-hard="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126027 4902 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126032 4902 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126038 4902 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126043 4902 flags.go:64] FLAG: --eviction-soft="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126072 4902 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126078 
4902 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126084 4902 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126090 4902 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126096 4902 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126101 4902 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126107 4902 flags.go:64] FLAG: --feature-gates="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126114 4902 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126120 4902 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126125 4902 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126131 4902 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126137 4902 flags.go:64] FLAG: --healthz-port="10248" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126143 4902 flags.go:64] FLAG: --help="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126149 4902 flags.go:64] FLAG: --hostname-override="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126154 4902 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126160 4902 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126165 4902 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126171 4902 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126177 4902 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126183 4902 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126188 4902 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126194 4902 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126199 4902 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126205 4902 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126211 4902 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126217 4902 flags.go:64] FLAG: --kube-reserved="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126223 4902 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126228 4902 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126234 4902 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126240 4902 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126246 4902 flags.go:64] FLAG: --lock-file="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126251 4902 flags.go:64] FLAG: 
--log-cadvisor-usage="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126257 4902 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126262 4902 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126271 4902 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126277 4902 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126282 4902 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126288 4902 flags.go:64] FLAG: --logging-format="text" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126293 4902 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126299 4902 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126305 4902 flags.go:64] FLAG: --manifest-url="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126310 4902 flags.go:64] FLAG: --manifest-url-header="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126317 4902 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126323 4902 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126330 4902 flags.go:64] FLAG: --max-pods="110" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126336 4902 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126341 4902 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126347 4902 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126352 4902 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126358 4902 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126364 4902 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126370 4902 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126383 4902 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126388 4902 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126394 4902 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126400 4902 flags.go:64] FLAG: --pod-cidr="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126405 4902 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126414 4902 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126420 4902 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126425 4902 flags.go:64] FLAG: --pods-per-core="0" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 
14:33:58.126431 4902 flags.go:64] FLAG: --port="10250" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126439 4902 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126445 4902 flags.go:64] FLAG: --provider-id="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126451 4902 flags.go:64] FLAG: --qos-reserved="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126457 4902 flags.go:64] FLAG: --read-only-port="10255" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126462 4902 flags.go:64] FLAG: --register-node="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126469 4902 flags.go:64] FLAG: --register-schedulable="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126477 4902 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126489 4902 flags.go:64] FLAG: --registry-burst="10" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126498 4902 flags.go:64] FLAG: --registry-qps="5" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126505 4902 flags.go:64] FLAG: --reserved-cpus="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126512 4902 flags.go:64] FLAG: --reserved-memory="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126521 4902 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126528 4902 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126534 4902 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126540 4902 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126546 4902 flags.go:64] FLAG: --runonce="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126551 4902 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126557 4902 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126563 4902 flags.go:64] FLAG: --seccomp-default="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126569 4902 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126575 4902 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126581 4902 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126587 4902 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126592 4902 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126598 4902 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126604 4902 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126609 4902 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126615 4902 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126620 4902 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126626 4902 flags.go:64] FLAG: 
--system-cgroups="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126632 4902 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126640 4902 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126646 4902 flags.go:64] FLAG: --tls-cert-file="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126651 4902 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126659 4902 flags.go:64] FLAG: --tls-min-version="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126665 4902 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126671 4902 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126676 4902 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126682 4902 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126687 4902 flags.go:64] FLAG: --v="2" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126694 4902 flags.go:64] FLAG: --version="false" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126702 4902 flags.go:64] FLAG: --vmodule="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126708 4902 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.126714 4902 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126851 4902 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
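The flags.go:64] FLAG: entries above dump every command-line flag and its effective value at startup. A sketch, assuming the journal text has been saved to a file, that collects those pairs into a dict so the settings of two boots can be diffed; the regex and the kubelet.log file name are assumptions, not anything the kubelet itself provides.

import re

# Hypothetical helper: pull FLAG: --name="value" pairs out of saved
# kubelet startup output.
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

def parse_flag_dump(text):
    """Return a dict mapping flag name to its logged value."""
    return dict(FLAG_RE.findall(text))

# Usage sketch (file name is an assumption):
# flags = parse_flag_dump(open("kubelet.log", encoding="utf-8").read())
# flags.get("--node-ip")   -> "192.168.126.11" in this boot
# flags.get("--max-pods")  -> "110"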
Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126859 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126865 4902 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126870 4902 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126875 4902 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126880 4902 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126885 4902 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126890 4902 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126902 4902 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126907 4902 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126915 4902 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126920 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126925 4902 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126931 4902 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126935 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126940 4902 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126945 4902 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126950 4902 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126954 4902 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126959 4902 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126964 4902 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126969 4902 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126974 4902 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126979 4902 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126983 4902 feature_gate.go:330] unrecognized feature gate: Example Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126988 4902 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126993 4902 feature_gate.go:330] unrecognized 
feature gate: InsightsConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.126998 4902 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127004 4902 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127008 4902 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127013 4902 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127018 4902 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127023 4902 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127028 4902 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127032 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127041 4902 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127047 4902 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127052 4902 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127072 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127076 4902 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127083 4902 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127088 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127096 4902 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127100 4902 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127105 4902 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127110 4902 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127115 4902 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127120 4902 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127125 4902 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127130 4902 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127136 4902 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127140 4902 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127147 4902 
feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127153 4902 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127160 4902 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127166 4902 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127172 4902 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127177 4902 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127182 4902 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127187 4902 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127192 4902 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127197 4902 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127201 4902 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127206 4902 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127212 4902 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127217 4902 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127221 4902 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127226 4902 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127231 4902 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127236 4902 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.127241 4902 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.127256 4902 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.135941 4902 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.135981 4902 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136138 4902 
feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136152 4902 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136164 4902 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136175 4902 feature_gate.go:330] unrecognized feature gate: Example Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136186 4902 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136197 4902 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136206 4902 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136215 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136224 4902 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136232 4902 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136240 4902 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136248 4902 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136258 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136266 4902 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136275 4902 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136283 4902 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136292 4902 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136300 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136309 4902 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136319 4902 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136328 4902 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136336 4902 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136345 4902 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136353 4902 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136362 4902 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136370 4902 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 14:33:58 crc 
kubenswrapper[4902]: W0121 14:33:58.136379 4902 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136387 4902 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136396 4902 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136404 4902 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136412 4902 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136422 4902 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136431 4902 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136439 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136448 4902 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136461 4902 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136474 4902 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136486 4902 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136497 4902 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136508 4902 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136517 4902 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136526 4902 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136535 4902 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136545 4902 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136554 4902 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136563 4902 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136572 4902 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136581 4902 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136590 4902 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136602 4902 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
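The feature_gate.go:386 line logged above summarizes the gates that were actually applied as "feature gates: {map[Name:true|false ...]}". A sketch that parses that summary into a Python dict; the sample string below is a shortened fragment of the real line, not the full map.

import re

# Hypothetical parser for the "feature gates: {map[...]}" summary line.
MAP_RE = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def parse_feature_gates(line):
    """Return the effective feature gates as a dict of bools."""
    m = MAP_RE.search(line)
    if not m:
        return {}
    gates = {}
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

# Shortened fragment copied from this log, for illustration only:
sample = "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
print(parse_feature_gates(sample))
# {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}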
Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136613 4902 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136623 4902 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136634 4902 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136645 4902 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136655 4902 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136666 4902 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136675 4902 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136684 4902 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136694 4902 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136703 4902 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136713 4902 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136722 4902 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136731 4902 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136739 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136748 4902 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136757 4902 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136765 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136773 4902 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136781 4902 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136789 4902 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.136798 4902 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.136812 4902 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 14:33:58 crc 
kubenswrapper[4902]: W0121 14:33:58.137044 4902 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137082 4902 feature_gate.go:330] unrecognized feature gate: Example Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137093 4902 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137103 4902 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137113 4902 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137122 4902 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137131 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137140 4902 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137150 4902 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137159 4902 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137167 4902 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137175 4902 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137183 4902 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137192 4902 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137200 4902 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137209 4902 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137217 4902 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137226 4902 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137235 4902 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137249 4902 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
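By this point the same block of feature_gate.go:330 "unrecognized feature gate" warnings has been emitted several times during startup. A small triage sketch, assuming the journal output has been saved to a file such as kubelet.log, that reduces the repetition to a deduplicated count per gate name.

import re
from collections import Counter

# Hypothetical triage helper for the repeated "unrecognized feature gate"
# warnings in a saved journal dump.
GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

def unrecognized_gate_counts(path="kubelet.log"):
    """Count how often each unrecognized gate name was warned about."""
    counts = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            # A captured line may hold several journal entries, so count all matches.
            for name in GATE_RE.findall(line):
                counts[name] += 1
    return counts

if __name__ == "__main__":
    for name, n in sorted(unrecognized_gate_counts().items()):
        print(f"{name}: {n}")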
Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137260 4902 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137270 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137280 4902 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137290 4902 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137299 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137308 4902 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137316 4902 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137325 4902 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137334 4902 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137343 4902 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137352 4902 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137362 4902 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137373 4902 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137381 4902 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137390 4902 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137398 4902 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137406 4902 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137414 4902 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137422 4902 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137431 4902 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137440 4902 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137448 4902 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137456 4902 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137464 4902 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137473 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137482 4902 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137490 4902 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137501 4902 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137512 4902 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137521 4902 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137531 4902 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137540 4902 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137549 4902 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137560 4902 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137570 4902 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137580 4902 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137589 4902 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137599 4902 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137607 4902 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137616 4902 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137624 4902 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137635 4902 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137644 4902 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137652 4902 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137662 4902 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137670 4902 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137683 4902 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137739 4902 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137748 4902 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137756 4902 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.137764 4902 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 14:33:58 crc 
kubenswrapper[4902]: I0121 14:33:58.137778 4902 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.138403 4902 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.143514 4902 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.143659 4902 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.144595 4902 server.go:997] "Starting client certificate rotation" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.144650 4902 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.144861 4902 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-26 21:25:44.564171781 +0000 UTC Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.144978 4902 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.152260 4902 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.154262 4902 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.155822 4902 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.164191 4902 log.go:25] "Validated CRI v1 runtime API" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.193624 4902 log.go:25] "Validated CRI v1 image API" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.195618 4902 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.198623 4902 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-14-28-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.198655 4902 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs 
blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.217622 4902 manager.go:217] Machine: {Timestamp:2026-01-21 14:33:58.215647042 +0000 UTC m=+0.292480091 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7 BootID:9c9a3794-1c52-4324-901d-b93cdd3e411b Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:38:48:26 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:38:48:26 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8d:3c:be Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:53:aa:fb Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7c:49:70 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:98:89:10 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:ae:70:99 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4e:ac:e2:b8:c1:29 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9a:d1:b4:8e:05:9f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.217914 4902 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.218153 4902 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.218703 4902 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.218930 4902 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.218970 4902 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.219355 4902 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.219368 4902 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.219525 4902 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.219557 4902 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.219886 4902 state_mem.go:36] "Initialized new in-memory state store" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.219980 4902 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.221326 4902 kubelet.go:418] "Attempting to sync node with API server" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.221353 4902 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.221370 4902 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.221385 4902 kubelet.go:324] "Adding apiserver pod source" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.221406 4902 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.223339 4902 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.226344 4902 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.227892 4902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.228050 4902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.228257 4902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.228311 4902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.228777 4902 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.229872 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.229932 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.229956 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.229975 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.230007 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.230027 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.230089 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.230132 4902 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.230157 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.230178 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.230206 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.230227 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.230949 4902 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.231871 4902 server.go:1280] "Started kubelet" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.232289 4902 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.232399 4902 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.232406 4902 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 14:33:58 crc systemd[1]: Started Kubernetes Kubelet. Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.234231 4902 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.234022 4902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.21:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc59e82541459 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:33:58.231815257 +0000 UTC m=+0.308648326,LastTimestamp:2026-01-21 14:33:58.231815257 +0000 UTC m=+0.308648326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.234687 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.234718 4902 server.go:460] "Adding debug handlers to kubelet server" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.234729 4902 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.234856 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:14:57.788281618 +0000 UTC Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.235014 4902 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.235028 4902 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.235219 4902 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.235851 4902 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.236200 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="200ms" Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.236695 4902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.236774 4902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.240973 4902 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.241005 4902 factory.go:55] Registering systemd factory Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.241015 4902 factory.go:221] Registration of the systemd container factory successfully Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.241370 4902 factory.go:153] Registering CRI-O factory Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.241390 4902 factory.go:221] Registration of the crio container factory successfully Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.241412 4902 factory.go:103] Registering Raw factory Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.241429 4902 manager.go:1196] Started watching for new ooms in manager Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.242716 4902 manager.go:319] Starting recovery of all containers Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.256820 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.256896 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.256915 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.256933 4902 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.256958 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.256974 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.256990 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257008 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257028 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257048 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257083 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257101 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257116 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257135 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257151 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257168 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257183 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257215 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257230 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257248 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257270 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257285 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257303 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257320 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257336 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257354 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257374 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257390 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257406 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257424 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257441 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257460 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257478 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257492 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257553 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257568 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257607 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257641 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257657 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257673 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257688 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257702 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257727 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257751 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257765 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257781 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257798 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257814 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257831 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257856 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257871 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257887 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257911 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257930 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257951 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257971 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.257989 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258007 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258025 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258045 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258080 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258101 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258119 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258138 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258155 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258171 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258187 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258203 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258219 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258236 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258253 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258268 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258285 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258323 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258342 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258358 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258374 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258389 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258404 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258421 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258437 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258450 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258465 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258483 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258498 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258516 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258535 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258550 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258565 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258580 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258597 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258614 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258631 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258646 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.258662 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259425 4902 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259460 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259483 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259501 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259522 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259540 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259559 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259578 4902 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259598 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259618 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259689 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259716 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259741 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259762 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259782 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259803 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259824 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259842 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259861 4902 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259881 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259899 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259919 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259935 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259956 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259979 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.259995 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260013 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260032 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260093 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260119 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260138 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260156 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260177 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260197 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260215 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260233 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260252 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260272 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260291 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260310 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260337 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260355 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260376 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260393 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260411 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260431 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260453 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260473 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260498 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260520 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260541 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260562 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260582 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260603 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260622 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260641 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260659 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260677 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260697 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260715 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260734 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260752 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260771 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260789 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260859 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260882 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260901 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260922 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260940 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260957 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260974 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.260995 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261012 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261032 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261077 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261098 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261115 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261135 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261153 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261169 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261189 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261207 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261224 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261243 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261263 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" 
seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261281 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261301 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261320 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261338 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261355 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261372 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261403 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261421 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261442 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261459 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261478 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" 
seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261498 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261514 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261531 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261550 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261568 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261585 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261602 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261618 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261634 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261652 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261679 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: 
I0121 14:33:58.261694 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261719 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261738 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261756 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261771 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261799 4902 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261824 4902 reconstruct.go:97] "Volume reconstruction finished" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.261835 4902 reconciler.go:26] "Reconciler: start to sync state" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.271139 4902 manager.go:324] Recovery completed Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.284195 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.286360 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.286397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.286410 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.289941 4902 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.289969 4902 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.290005 4902 state_mem.go:36] "Initialized new in-memory state store" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.291193 4902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.293445 4902 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.293537 4902 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.293620 4902 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.293736 4902 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.295674 4902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.295749 4902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.299971 4902 policy_none.go:49] "None policy: Start" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.301213 4902 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.301251 4902 state_mem.go:35] "Initializing new in-memory state store" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.335976 4902 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.342400 4902 manager.go:334] "Starting Device Plugin manager" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.342484 4902 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.342508 4902 server.go:79] "Starting device plugin registration server" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.343018 4902 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.343047 4902 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.343237 4902 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.343362 4902 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.343379 4902 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.353875 4902 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.394642 4902 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 14:33:58 crc kubenswrapper[4902]: 
I0121 14:33:58.394746 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.395905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.395942 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.395953 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.396115 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.396406 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.396469 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.396801 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.396829 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.396839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.396937 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.397158 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.397227 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.397962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.398001 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.398021 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.398045 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.398114 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.398129 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.398307 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.398349 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.398769 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.399560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.399590 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.399607 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.399892 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.399917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.399928 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.399966 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.400017 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.400043 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.400358 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.401005 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.401515 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.402042 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.402282 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.402343 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.402800 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.402854 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.404715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.404753 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.404769 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.405175 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.405385 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.405535 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.437348 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="400ms" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.443658 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.444843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.444877 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.444888 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.444908 4902 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.445331 4902 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.21:6443: connect: connection refused" node="crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.463700 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.463744 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.463773 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.463852 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.463947 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.463984 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.464015 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.464032 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.464130 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.464168 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.464195 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.464218 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.464239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.464265 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.464322 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.565524 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.565746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.565856 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.565762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.565904 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.565932 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.565778 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.565981 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.565978 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566002 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566029 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566092 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566115 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566115 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566157 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566120 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566187 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566118 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566234 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566233 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566258 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566283 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566315 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566260 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566332 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566284 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566288 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566400 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566266 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.566473 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.645806 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.647404 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.647475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.647489 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.647525 4902 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.648276 4902 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.21:6443: connect: connection refused" node="crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.734329 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.742918 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.767342 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.777811 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c5c68826b68ff4b6425091aa85def5c0f5ca110883fd47d3f917d6d0f07ced22 WatchSource:0}: Error finding container c5c68826b68ff4b6425091aa85def5c0f5ca110883fd47d3f917d6d0f07ced22: Status 404 returned error can't find the container with id c5c68826b68ff4b6425091aa85def5c0f5ca110883fd47d3f917d6d0f07ced22 Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.784730 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7f2e011b6c2edc84084ad71c1af86d161d0b491842f9d583da3ae9193ea3cd24 WatchSource:0}: Error finding container 7f2e011b6c2edc84084ad71c1af86d161d0b491842f9d583da3ae9193ea3cd24: Status 404 returned error can't find the container with id 7f2e011b6c2edc84084ad71c1af86d161d0b491842f9d583da3ae9193ea3cd24 Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.793510 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.801350 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0e550d89347ff0986c5b417808279985785a4d03b3aa148b02907a9a348f6e38 WatchSource:0}: Error finding container 0e550d89347ff0986c5b417808279985785a4d03b3aa148b02907a9a348f6e38: Status 404 returned error can't find the container with id 0e550d89347ff0986c5b417808279985785a4d03b3aa148b02907a9a348f6e38 Jan 21 14:33:58 crc kubenswrapper[4902]: I0121 14:33:58.802146 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.814479 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ea08208103286d8c701b3037c67f9fbf692f3648e465b6e97f2d6cd91e73ded7 WatchSource:0}: Error finding container ea08208103286d8c701b3037c67f9fbf692f3648e465b6e97f2d6cd91e73ded7: Status 404 returned error can't find the container with id ea08208103286d8c701b3037c67f9fbf692f3648e465b6e97f2d6cd91e73ded7 Jan 21 14:33:58 crc kubenswrapper[4902]: W0121 14:33:58.816166 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8e9f47ee4fc1a68aa6442696bb5027025fe9a5cd2e9d593e38738fb9b65f0140 WatchSource:0}: Error finding container 8e9f47ee4fc1a68aa6442696bb5027025fe9a5cd2e9d593e38738fb9b65f0140: Status 404 returned error can't find the container with id 8e9f47ee4fc1a68aa6442696bb5027025fe9a5cd2e9d593e38738fb9b65f0140 Jan 21 14:33:58 crc kubenswrapper[4902]: E0121 14:33:58.838735 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="800ms" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.049596 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.051721 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.051773 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.051787 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.051824 4902 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:33:59 crc kubenswrapper[4902]: E0121 14:33:59.052371 4902 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.21:6443: connect: connection refused" node="crc" Jan 21 14:33:59 crc kubenswrapper[4902]: W0121 14:33:59.162338 4902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:59 crc kubenswrapper[4902]: E0121 14:33:59.162448 4902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.233007 4902 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial 
tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.234999 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:32:59.392427927 +0000 UTC Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.300010 4902 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f" exitCode=0 Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.300101 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.300191 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7f2e011b6c2edc84084ad71c1af86d161d0b491842f9d583da3ae9193ea3cd24"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.300266 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.301345 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.301372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.301383 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.301557 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.301584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8e9f47ee4fc1a68aa6442696bb5027025fe9a5cd2e9d593e38738fb9b65f0140"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.302672 4902 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0" exitCode=0 Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.302793 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.302844 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ea08208103286d8c701b3037c67f9fbf692f3648e465b6e97f2d6cd91e73ded7"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.302976 4902 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.304299 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.304349 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.304376 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.304447 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e" exitCode=0 Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.304509 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.304553 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0e550d89347ff0986c5b417808279985785a4d03b3aa148b02907a9a348f6e38"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.304657 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.305411 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.305429 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.305437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.306237 4902 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0" exitCode=0 Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.306259 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.306302 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c5c68826b68ff4b6425091aa85def5c0f5ca110883fd47d3f917d6d0f07ced22"} Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.306383 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.307943 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.308295 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.308322 
4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.308331 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.308908 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.308938 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.308948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:59 crc kubenswrapper[4902]: W0121 14:33:59.566453 4902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:59 crc kubenswrapper[4902]: E0121 14:33:59.566537 4902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:33:59 crc kubenswrapper[4902]: W0121 14:33:59.575319 4902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:59 crc kubenswrapper[4902]: E0121 14:33:59.575412 4902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:33:59 crc kubenswrapper[4902]: E0121 14:33:59.640117 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="1.6s" Jan 21 14:33:59 crc kubenswrapper[4902]: W0121 14:33:59.775236 4902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:33:59 crc kubenswrapper[4902]: E0121 14:33:59.775846 4902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.853352 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.855777 4902 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.855826 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.855838 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:33:59 crc kubenswrapper[4902]: I0121 14:33:59.855867 4902 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:33:59 crc kubenswrapper[4902]: E0121 14:33:59.856441 4902 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.21:6443: connect: connection refused" node="crc" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.164296 4902 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 14:34:00 crc kubenswrapper[4902]: E0121 14:34:00.166345 4902 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.21:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.233221 4902 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.21:6443: connect: connection refused Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.236186 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:37:00.340717528 +0000 UTC Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.310670 4902 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53" exitCode=0 Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.310731 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.310845 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.311926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.311948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.311956 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.315469 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f5941fa9b0928cf6a092eda06a1456dc7cc2e20ca9cded4fc963bf722557ddb0"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.315590 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.316585 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.316604 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.316613 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.317767 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.317787 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.317805 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.317903 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.319118 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.319165 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.319178 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.324625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.324657 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.324667 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.324840 
4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.327141 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.327171 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.327179 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.330742 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.330769 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2"} Jan 21 14:34:00 crc kubenswrapper[4902]: I0121 14:34:00.330779 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551"} Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.236430 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:18:36.482652369 +0000 UTC Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.345871 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02"} Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.345960 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.345976 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c"} Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.347433 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.347492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.347514 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.349782 4902 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8" exitCode=0 Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.349863 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8"} Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.349960 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.350107 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.350176 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.350213 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.351082 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.351144 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.351170 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.351506 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.351534 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.351546 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.351799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.351870 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.351897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.456908 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.458197 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.458256 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.458270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:01 crc kubenswrapper[4902]: I0121 14:34:01.458306 4902 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.237493 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:04:29.162009844 +0000 UTC Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.357156 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360"} Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.357216 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c"} Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.357235 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b"} Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.357244 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2"} Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.357252 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.357310 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.359179 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.359245 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:02 crc kubenswrapper[4902]: I0121 14:34:02.359258 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.101418 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.186154 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.238705 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:58:54.938932828 +0000 UTC Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.367021 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3"} Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.367077 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.367167 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.367193 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.368160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.368201 4902 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.368211 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.368565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.368616 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.368627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.482261 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.482537 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.483916 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.483949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.483959 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.491040 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:03 crc kubenswrapper[4902]: I0121 14:34:03.578857 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.239711 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:00:10.828310125 +0000 UTC Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.370386 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.370425 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.370498 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.370564 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.372145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.372295 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.372394 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.372218 4902 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.372476 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.372494 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.372443 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.372564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.372420 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:04 crc kubenswrapper[4902]: I0121 14:34:04.465571 4902 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 14:34:05 crc kubenswrapper[4902]: I0121 14:34:05.240436 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:28:39.148004054 +0000 UTC Jan 21 14:34:05 crc kubenswrapper[4902]: I0121 14:34:05.372422 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:05 crc kubenswrapper[4902]: I0121 14:34:05.373481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:05 crc kubenswrapper[4902]: I0121 14:34:05.373538 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:05 crc kubenswrapper[4902]: I0121 14:34:05.373552 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:05 crc kubenswrapper[4902]: I0121 14:34:05.811749 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.241366 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:19:22.017026192 +0000 UTC Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.275916 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.374707 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.375771 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.375832 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.375924 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.658988 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:34:06 crc kubenswrapper[4902]: 
I0121 14:34:06.659339 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.661308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.661348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:06 crc kubenswrapper[4902]: I0121 14:34:06.661362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.171543 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.171846 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.174284 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.174342 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.174366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.241634 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:39:50.923526811 +0000 UTC Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.378233 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.379653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.379708 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.379725 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.472355 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.472569 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.474232 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.474294 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:07 crc kubenswrapper[4902]: I0121 14:34:07.474312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:08 crc kubenswrapper[4902]: I0121 14:34:08.242351 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:09:57.740548753 +0000 UTC Jan 21 14:34:08 crc 
kubenswrapper[4902]: E0121 14:34:08.354104 4902 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 14:34:09 crc kubenswrapper[4902]: I0121 14:34:09.243228 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:37:41.872108789 +0000 UTC Jan 21 14:34:09 crc kubenswrapper[4902]: I0121 14:34:09.276564 4902 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:34:09 crc kubenswrapper[4902]: I0121 14:34:09.276687 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:34:10 crc kubenswrapper[4902]: I0121 14:34:10.114924 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 14:34:10 crc kubenswrapper[4902]: I0121 14:34:10.115202 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:10 crc kubenswrapper[4902]: I0121 14:34:10.116920 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:10 crc kubenswrapper[4902]: I0121 14:34:10.117001 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:10 crc kubenswrapper[4902]: I0121 14:34:10.117020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:10 crc kubenswrapper[4902]: I0121 14:34:10.243707 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:10:49.330329812 +0000 UTC Jan 21 14:34:11 crc kubenswrapper[4902]: I0121 14:34:11.235080 4902 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 14:34:11 crc kubenswrapper[4902]: E0121 14:34:11.241369 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 21 14:34:11 crc kubenswrapper[4902]: I0121 14:34:11.244580 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:52:40.017137249 +0000 UTC Jan 21 14:34:11 crc kubenswrapper[4902]: W0121 14:34:11.376113 4902 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 14:34:11 
crc kubenswrapper[4902]: I0121 14:34:11.376222 4902 trace.go:236] Trace[916932595]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:34:01.374) (total time: 10001ms): Jan 21 14:34:11 crc kubenswrapper[4902]: Trace[916932595]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:34:11.376) Jan 21 14:34:11 crc kubenswrapper[4902]: Trace[916932595]: [10.001323531s] [10.001323531s] END Jan 21 14:34:11 crc kubenswrapper[4902]: E0121 14:34:11.376254 4902 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 14:34:11 crc kubenswrapper[4902]: E0121 14:34:11.459585 4902 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 14:34:11 crc kubenswrapper[4902]: I0121 14:34:11.534817 4902 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 14:34:11 crc kubenswrapper[4902]: I0121 14:34:11.534920 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 14:34:11 crc kubenswrapper[4902]: I0121 14:34:11.540234 4902 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 14:34:11 crc kubenswrapper[4902]: I0121 14:34:11.540271 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 14:34:12 crc kubenswrapper[4902]: I0121 14:34:12.244870 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:57:41.386632691 +0000 UTC Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.111746 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.112434 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.114788 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:13 crc 
kubenswrapper[4902]: I0121 14:34:13.114860 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.114896 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.120169 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.244963 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:34:02.447705208 +0000 UTC Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.397063 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.398707 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.398749 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.398759 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.583979 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.584226 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.585686 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.585741 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:13 crc kubenswrapper[4902]: I0121 14:34:13.585756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:14 crc kubenswrapper[4902]: I0121 14:34:14.246086 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:19:59.269527288 +0000 UTC Jan 21 14:34:14 crc kubenswrapper[4902]: I0121 14:34:14.659994 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:14 crc kubenswrapper[4902]: I0121 14:34:14.661544 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:14 crc kubenswrapper[4902]: I0121 14:34:14.661626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:14 crc kubenswrapper[4902]: I0121 14:34:14.661639 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:14 crc kubenswrapper[4902]: I0121 14:34:14.661677 4902 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:34:14 crc kubenswrapper[4902]: E0121 14:34:14.666359 4902 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: 
autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 21 14:34:15 crc kubenswrapper[4902]: I0121 14:34:15.246942 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:23:08.072991374 +0000 UTC Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.247453 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:22:09.215470114 +0000 UTC Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537302 4902 trace.go:236] Trace[2058640533]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:34:01.511) (total time: 15025ms): Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[2058640533]: ---"Objects listed" error: 15025ms (14:34:16.537) Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[2058640533]: [15.025832893s] [15.025832893s] END Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537338 4902 trace.go:236] Trace[562478301]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:34:02.420) (total time: 14116ms): Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[562478301]: ---"Objects listed" error: 14116ms (14:34:16.537) Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[562478301]: [14.116333259s] [14.116333259s] END Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537390 4902 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537343 4902 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537437 4902 trace.go:236] Trace[1341060811]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:34:01.755) (total time: 14782ms): Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[1341060811]: ---"Objects listed" error: 14782ms (14:34:16.537) Jan 21 14:34:16 crc kubenswrapper[4902]: Trace[1341060811]: [14.782383245s] [14.782383245s] END Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.537451 4902 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.538625 4902 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.543847 4902 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.565374 4902 csr.go:261] certificate signing request csr-jfcs6 is approved, waiting to be issued Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.572224 4902 csr.go:257] certificate signing request csr-jfcs6 is issued Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.579691 4902 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49792->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.579759 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49792->192.168.126.11:17697: read: connection reset by peer" Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.580122 4902 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.580181 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 14:34:16 crc kubenswrapper[4902]: I0121 14:34:16.624185 4902 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.172281 4902 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.172356 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.233962 4902 apiserver.go:52] "Watching apiserver" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.237589 4902 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.237973 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.238425 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.238505 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.238566 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.238616 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.238743 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.240120 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.240410 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.241940 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.242022 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.242937 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.243036 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.243126 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.244585 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.244652 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.244781 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.245014 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.246309 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.246504 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.248143 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:40:57.724675158 +0000 UTC Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.297182 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.299169 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.301086 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.316260 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.336210 4902 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.336391 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342127 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342177 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342200 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342244 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342268 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342288 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342307 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342327 
4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342349 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342366 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342389 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342409 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342430 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342451 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342452 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342508 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342530 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342548 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342566 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342636 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342639 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342657 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342681 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342681 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342721 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342792 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342809 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342836 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342856 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342883 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342901 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342924 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342948 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342974 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.342995 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343017 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343063 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343030 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343088 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343117 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343139 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343125 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343164 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343192 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343211 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343213 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343263 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343280 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343294 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343300 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343343 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343340 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343396 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343442 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343464 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343477 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343488 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343514 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343561 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343583 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343624 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343628 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343648 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343683 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343688 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343726 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343759 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343775 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343781 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343801 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343854 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343908 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343929 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343968 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343988 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344010 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344030 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344068 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344215 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344244 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344264 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344290 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344397 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344445 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344484 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344509 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344558 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344610 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344633 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344656 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344714 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344770 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344849 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344871 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344891 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344913 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344933 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344954 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344973 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345010 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345160 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345199 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345236 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345259 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345332 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345367 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345390 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345412 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345430 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345477 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345501 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345540 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345584 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345623 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345673 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345715 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345738 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345759 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345799 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345823 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345848 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345870 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345892 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345914 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345935 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345957 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345979 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346018 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346061 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346085 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346109 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346130 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346150 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346172 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346194 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346215 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346235 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346256 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346280 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346304 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346330 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346373 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346399 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346423 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346445 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346465 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346491 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346515 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 14:34:17 crc 
kubenswrapper[4902]: I0121 14:34:17.346538 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346562 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346585 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346605 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346627 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346652 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346675 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346698 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346720 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346744 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 14:34:17 crc 
kubenswrapper[4902]: I0121 14:34:17.346763 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346793 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346813 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346835 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346882 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346925 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346949 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346973 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346995 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347019 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347294 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347327 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347349 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347377 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347402 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347448 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347471 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347493 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347514 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347535 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347576 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347600 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347622 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347670 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347694 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347716 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347744 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347766 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347790 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347813 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347837 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347880 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347954 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347978 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348003 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348064 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348090 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348174 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348203 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348255 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348280 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348306 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348340 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348369 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348394 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348422 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348574 4902 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348591 4902 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348605 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348619 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348631 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348643 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348655 4902 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348670 4902 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348684 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348697 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348708 4902 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348720 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348733 4902 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348744 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348775 4902 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349697 4902 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.343867 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344198 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344321 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344472 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344518 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344628 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344683 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.344869 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345016 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345234 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345468 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345787 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.345920 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346092 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346239 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346285 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346470 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346513 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.346805 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347345 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347717 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347824 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.347980 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348263 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348321 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348466 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.352807 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.352832 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348598 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348616 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348673 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348818 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348837 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348843 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348993 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349014 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349201 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349346 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349380 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349540 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349751 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349769 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353155 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.349846 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.849828783 +0000 UTC m=+19.926661812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353404 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353575 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353623 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.353667 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.348516 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350097 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350118 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350249 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350284 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350404 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350422 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350531 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350662 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350742 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350814 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.350864 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351003 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351178 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351287 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351346 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351363 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351537 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351703 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351911 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.351953 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.352314 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354093 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354160 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354260 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.349934 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354704 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354401 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.354988 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.355879 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.356340 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.356355 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.356518 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.356547 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.357188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.357388 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.357738 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.357891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358002 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358324 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358340 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358731 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358804 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.358878 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359091 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359167 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359189 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359473 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359761 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359785 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.359902 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.360099 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.361384 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.361618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.361731 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.361755 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362037 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362276 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.362344 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.362415 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.862392368 +0000 UTC m=+19.939225397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362359 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362519 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362777 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.362828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.362890 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.362920 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.862913403 +0000 UTC m=+19.939746432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.363068 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.363546 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.364122 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.364211 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.364492 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365005 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365145 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365595 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365686 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.365956 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.366175 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.366251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.366642 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.366689 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367015 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367056 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367333 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367482 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367652 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367912 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.367930 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.368141 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.368351 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.368650 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.368906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.369906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370336 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370449 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370568 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370650 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.370797 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.371747 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.371943 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.374268 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374742 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374787 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374780 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374805 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374814 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374819 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 
14:34:17.374900 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.874879112 +0000 UTC m=+19.951712361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.374924 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:17.874915233 +0000 UTC m=+19.951748492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.374974 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a0
2829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.377033 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.377845 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.379440 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.379628 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.379754 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.380302 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.380375 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.380823 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.382753 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.383208 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.383553 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.383596 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.384350 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.384540 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.385432 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.385610 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.385660 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.385924 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.386963 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.386999 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.386975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.387096 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.387433 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.388959 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.390031 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.390619 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.391123 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.396584 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.396966 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.397249 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.397314 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.398773 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.408833 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.409868 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.410343 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.413934 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.415439 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02" exitCode=255 Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.415648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02"} Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.421935 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.422377 4902 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.429275 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.430082 4902 scope.go:117] "RemoveContainer" containerID="35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.430357 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.432732 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.437533 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449552 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449617 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449705 4902 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449716 4902 reconciler_common.go:293] "Volume 
detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449726 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449735 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449744 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449753 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449762 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449770 4902 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449779 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449788 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449796 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449804 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449813 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449821 4902 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449830 4902 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449840 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449849 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449857 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449866 4902 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449876 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449886 4902 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449896 4902 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449887 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449908 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449945 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.449904 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450072 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450087 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450104 4902 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450117 4902 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450133 4902 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450147 4902 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450159 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450172 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450181 4902 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450197 4902 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450211 4902 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450224 4902 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450240 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450257 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450274 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450288 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450302 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450315 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450329 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450342 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450354 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450367 4902 
reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450379 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450393 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450406 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450419 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450433 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450445 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450457 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450469 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450482 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450495 4902 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450509 4902 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450523 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450536 4902 reconciler_common.go:293] "Volume 
detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450549 4902 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450562 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450575 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450587 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450601 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450613 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450625 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450639 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450651 4902 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450663 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450673 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450685 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450697 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450709 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450720 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450732 4902 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450741 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450753 4902 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450764 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450773 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450781 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450789 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450798 4902 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450807 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450815 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450824 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450833 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450842 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450851 4902 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450865 4902 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450886 4902 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450895 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450904 4902 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450913 4902 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450922 4902 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450937 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450949 4902 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450966 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.450984 4902 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451070 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451086 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451098 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451112 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451127 4902 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451141 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451156 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451202 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451216 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451233 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451250 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451263 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451276 4902 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451290 4902 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451303 4902 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451317 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451330 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451342 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451355 4902 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451367 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451388 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451401 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451413 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451427 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451438 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451450 4902 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451464 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451477 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451489 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451500 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451512 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451523 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451537 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451550 4902 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451561 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451571 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451592 4902 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451604 4902 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451615 4902 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451626 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451638 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451649 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451660 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451672 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451683 4902 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451694 4902 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451704 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451715 4902 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451729 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451741 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451753 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451764 4902 reconciler_common.go:293] "Volume detached for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451778 4902 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451791 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451804 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451815 4902 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451829 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451840 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451851 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451863 4902 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451875 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451887 4902 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451898 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451910 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451922 4902 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451933 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451945 4902 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451958 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451970 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451981 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.451992 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452011 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452024 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452035 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452090 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452104 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452116 4902 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452127 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452139 4902 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452152 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452165 4902 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.452177 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.466331 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.477707 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.490183 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.503783 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.515404 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.525285 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.533867 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.547155 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.557157 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.565311 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.574007 4902 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 14:29:16 +0000 UTC, rotation deadline is 2026-11-20 09:14:14.406970723 +0000 UTC Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.574082 4902 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7266h39m56.83289123s for next certificate rotation Jan 21 14:34:17 crc kubenswrapper[4902]: W0121 14:34:17.575639 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-38b087dd777b1643258b98fad4c6bc090fe401764756a6046f1cca9609518a85 WatchSource:0}: Error finding container 38b087dd777b1643258b98fad4c6bc090fe401764756a6046f1cca9609518a85: Status 404 returned error can't find the container with id 38b087dd777b1643258b98fad4c6bc090fe401764756a6046f1cca9609518a85 Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.576596 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:34:17 crc kubenswrapper[4902]: W0121 14:34:17.597977 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-3da1374cc3504eef7f81b6552fe8e38497ff87dd64c88558be24c085e208c0ad WatchSource:0}: Error finding container 3da1374cc3504eef7f81b6552fe8e38497ff87dd64c88558be24c085e208c0ad: Status 404 returned error can't find the container with id 3da1374cc3504eef7f81b6552fe8e38497ff87dd64c88558be24c085e208c0ad Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.859954 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.860155 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.860129073 +0000 UTC m=+20.936962102 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.960559 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.960603 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.960625 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:17 crc kubenswrapper[4902]: I0121 14:34:17.960645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960773 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960790 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960800 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960845 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.960831874 +0000 UTC m=+21.037664903 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960885 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960906 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.960900616 +0000 UTC m=+21.037733645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960948 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.960967 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.960961387 +0000 UTC m=+21.037794416 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.961006 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.961014 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.961021 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:17 crc kubenswrapper[4902]: E0121 14:34:17.961058 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:18.961033269 +0000 UTC m=+21.037866298 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.145976 4902 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146270 4902 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146334 4902 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146336 4902 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146366 4902 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no 
items received Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146287 4902 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146286 4902 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.146413 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Post \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?timeout=10s\": read tcp 38.129.56.21:49398->38.129.56.21:6443: use of closed network connection" interval="6.4s" Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146411 4902 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146461 4902 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.146483 4902 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.146442 4902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.129.56.21:49398->38.129.56.21:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cc59ea371daa3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:33:58.787414691 +0000 UTC m=+0.864247720,LastTimestamp:2026-01-21 14:33:58.787414691 +0000 UTC m=+0.864247720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.179228 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-62549"] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.179643 4902 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.181526 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.181767 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.182709 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.193594 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.210489 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.230663 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.241907 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.249150 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:34:58.734244822 +0000 UTC Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.250910 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.262890 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-hosts-file\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.262941 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjnps\" (UniqueName: \"kubernetes.io/projected/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-kube-api-access-qjnps\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.272127 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.284309 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.298842 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.299629 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.300267 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.300855 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.301411 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.301859 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.302422 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.302920 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.303560 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: 
I0121 14:34:18.304064 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.304565 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.305286 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.305775 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.308561 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.308585 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.309066 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.309889 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.310626 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.310975 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.311917 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.312494 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.312908 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.313829 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.314323 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.315334 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.315718 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.316688 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.317330 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.318192 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.318726 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.319263 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.320035 4902 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.320148 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.321158 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.321682 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.322653 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.323026 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.324518 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.325501 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.325967 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.327066 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.327937 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.328925 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.329499 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.330440 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.331059 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.331876 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.332393 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.333249 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.333588 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.333935 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.334748 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.335343 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.336183 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.336677 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.337216 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.338016 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.344703 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.363459 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjnps\" (UniqueName: \"kubernetes.io/projected/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-kube-api-access-qjnps\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.363516 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-hosts-file\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.363578 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.363657 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-hosts-file\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.380834 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.398319 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjnps\" (UniqueName: \"kubernetes.io/projected/5f7f4ebe-2b62-4cab-934b-f038b6a05d07-kube-api-access-qjnps\") pod \"node-resolver-62549\" (UID: \"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\") " pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.399734 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.419501 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.421469 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.421752 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.423467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3da1374cc3504eef7f81b6552fe8e38497ff87dd64c88558be24c085e208c0ad"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.425002 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.425031 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.425061 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5ea185bd2307fef20a711a18ba4db2a4ec1d2f999a29a181dd19b00ebc3b6ccc"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.426922 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.426954 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"38b087dd777b1643258b98fad4c6bc090fe401764756a6046f1cca9609518a85"} Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.435719 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.459061 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.477493 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21
T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.492963 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-62549" Jan 21 14:34:18 crc kubenswrapper[4902]: W0121 14:34:18.506550 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f7f4ebe_2b62_4cab_934b_f038b6a05d07.slice/crio-7d3692c1b4b3e544aef2f67122dcee6a64d813c953fe9e6ee3bf1f3b6807919f WatchSource:0}: Error finding container 7d3692c1b4b3e544aef2f67122dcee6a64d813c953fe9e6ee3bf1f3b6807919f: Status 404 returned error can't find the container with id 7d3692c1b4b3e544aef2f67122dcee6a64d813c953fe9e6ee3bf1f3b6807919f Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.514062 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b
82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.554142 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.574638 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.597345 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.613384 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.630307 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.651867 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.670530 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.684221 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.701296 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.867017 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.867152 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.867133609 +0000 UTC m=+22.943966638 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.942138 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-h68nf"] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.942695 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.943108 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8l7jc"] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.943895 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-m2bnb"] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944234 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-mztd6"] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944355 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944545 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944556 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944949 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.945015 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944742 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944781 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.944830 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.952896 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.953089 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.953940 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954493 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954676 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954691 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954727 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954692 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954821 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954841 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954896 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954920 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.954936 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.955064 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.967720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-proxy-tls\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") 
" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.967788 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-bin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.967818 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.967841 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968340 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968392 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968419 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cnibin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968470 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-system-cni-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968541 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968563 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968578 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968630 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.968611911 +0000 UTC m=+23.045445140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968547 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968683 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968716 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968735 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-kubelet\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968751 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-system-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968766 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-k8s-cni-cncf-io\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968782 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-multus-certs\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968798 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-os-release\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.968843 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.968832137 +0000 UTC m=+23.045665166 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968858 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-binary-copy\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968881 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rr89\" (UniqueName: \"kubernetes.io/projected/7dbee8a9-6952-46b5-a958-ff8f1847fabd-kube-api-access-9rr89\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968904 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.968988 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969019 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969062 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dtf6\" (UniqueName: 
\"kubernetes.io/projected/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-kube-api-access-4dtf6\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969082 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969103 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-multus\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969123 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969131 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969142 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969161 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969147 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969185 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969187 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-rootfs\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: 
I0121 14:34:18.969219 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-os-release\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969238 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.969226738 +0000 UTC m=+23.046059767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cni-binary-copy\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969278 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969299 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969322 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-netns\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969372 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-etc-kubernetes\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969387 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 
14:34:18.969403 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969440 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-daemon-config\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969458 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h7w8\" (UniqueName: \"kubernetes.io/projected/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-kube-api-access-8h7w8\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969473 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969512 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cnibin\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-hostroot\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969617 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-conf-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969634 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969653 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 
14:34:18.969690 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969734 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-mcd-auth-proxy-config\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969777 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: E0121 14:34:18.969820 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:20.969806944 +0000 UTC m=+23.046639973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969774 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-socket-dir-parent\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.969847 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.971427 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"i
mageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:18 crc kubenswrapper[4902]: I0121 14:34:18.995355 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.011882 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.031140 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.052919 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.059788 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071256 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cnibin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071306 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-system-cni-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071369 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071392 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071411 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-system-cni-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: 
\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071461 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071425 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-kubelet\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071496 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071473 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-kubelet\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071422 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cnibin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071527 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-system-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071558 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-k8s-cni-cncf-io\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-multus-certs\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071623 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-os-release\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071632 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-k8s-cni-cncf-io\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-binary-copy\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071667 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rr89\" (UniqueName: \"kubernetes.io/projected/7dbee8a9-6952-46b5-a958-ff8f1847fabd-kube-api-access-9rr89\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-system-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071692 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071739 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071761 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 
14:34:19.071847 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dtf6\" (UniqueName: \"kubernetes.io/projected/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-kube-api-access-4dtf6\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071873 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071901 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-multus-certs\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071902 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-multus\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071928 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-multus\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071949 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071997 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072022 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-rootfs\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072060 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-os-release\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072081 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cni-binary-copy\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072120 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072144 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-netns\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-etc-kubernetes\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072191 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072217 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072238 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-daemon-config\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072259 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h7w8\" (UniqueName: \"kubernetes.io/projected/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-kube-api-access-8h7w8\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " 
pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072282 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072302 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cnibin\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072319 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072383 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-hostroot\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072325 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-hostroot\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072420 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072423 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-conf-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072447 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-conf-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072456 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072480 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072484 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072521 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-mcd-auth-proxy-config\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072542 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-socket-dir-parent\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072562 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072587 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-proxy-tls\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-bin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072632 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072657 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072668 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-binary-copy\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072684 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072714 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.071873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-os-release\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072755 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072757 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072787 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072797 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072837 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072868 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cnibin\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072901 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072926 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.072963 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-rootfs\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073024 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-os-release\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073071 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbee8a9-6952-46b5-a958-ff8f1847fabd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073124 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073158 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-var-lib-cni-bin\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073203 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-socket-dir-parent\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073584 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073620 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-etc-kubernetes\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073650 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073752 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-cni-binary-copy\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073804 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073855 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-mcd-auth-proxy-config\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.073969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-cni-dir\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.074014 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-host-run-netns\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.074072 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbee8a9-6952-46b5-a958-ff8f1847fabd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.074375 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-multus-daemon-config\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.079889 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.080830 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.081452 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-proxy-tls\") pod \"machine-config-daemon-m2bnb\" 
(UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.101290 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.101705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rr89\" (UniqueName: \"kubernetes.io/projected/7dbee8a9-6952-46b5-a958-ff8f1847fabd-kube-api-access-9rr89\") pod \"multus-additional-cni-plugins-h68nf\" (UID: \"7dbee8a9-6952-46b5-a958-ff8f1847fabd\") " pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.117475 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.118676 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.146370 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.196555 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.206283 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dtf6\" (UniqueName: \"kubernetes.io/projected/d6c85cc7-ee09-4640-ab22-ce79d086ad7a-kube-api-access-4dtf6\") pod \"machine-config-daemon-m2bnb\" (UID: \"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\") " pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.206333 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") pod \"ovnkube-node-8l7jc\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.220062 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.244901 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.249331 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:20:42.093382458 +0000 UTC Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.256454 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h68nf" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.259070 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.271391 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.275782 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\"
,\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\
\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.278861 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.287981 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.294840 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.294865 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.294855 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:19 crc kubenswrapper[4902]: E0121 14:34:19.294945 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:19 crc kubenswrapper[4902]: E0121 14:34:19.295034 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:19 crc kubenswrapper[4902]: E0121 14:34:19.295130 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.299235 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.307276 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8h7w8\" (UniqueName: \"kubernetes.io/projected/037b55cf-cb9e-41ce-8b1e-3898f490a4aa-kube-api-access-8h7w8\") pod \"multus-mztd6\" (UID: \"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\") " pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.320230 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.339102 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.365072 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.380294 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.394078 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.408636 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.422783 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.429616 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"5ce6899ab2b12b8f4895228356fb88bbef937550a4743b5874ab9aba66a78a98"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.434319 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerStarted","Data":"604704edeaee03e5b7b43758ad962447ff48da321e957b529c8db32a87c93efe"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.436573 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.436718 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"5db5d6c210cd289c7ac6d65a204f7254d1bffda346bedb3bbbbf5f06bf748884"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.438067 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-62549" event={"ID":"5f7f4ebe-2b62-4cab-934b-f038b6a05d07","Type":"ContainerStarted","Data":"dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.438133 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-62549" event={"ID":"5f7f4ebe-2b62-4cab-934b-f038b6a05d07","Type":"ContainerStarted","Data":"7d3692c1b4b3e544aef2f67122dcee6a64d813c953fe9e6ee3bf1f3b6807919f"} Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.452424 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.466586 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.468701 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.482461 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.499707 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.516395 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.529155 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.543713 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.549036 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.550817 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.557609 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.563399 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-mztd6" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.568174 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.570900 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.586009 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.585878 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.589590 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.599416 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.624938 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:19 crc kubenswrapper[4902]: I0121 14:34:19.638440 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.138887 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.152292 4902 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd/etcd-crc"] Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.155742 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 
14:34:20.156790 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.170776 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.182106 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.198122 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.211526 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.225823 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.239530 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.250362 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:04:54.763893733 +0000 UTC Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.252904 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.272753 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.286489 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.299612 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.312441 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.324253 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.336030 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.348208 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.360120 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.375019 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.388088 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.400512 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.410845 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.426901 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.443459 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.443514 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.445518 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.445581 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"2640a78ce524443cef8004d901f431b31719521bf07a79a70416e95f2c4391f7"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.448247 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597" exitCode=0 Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.448346 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.450826 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" exitCode=0 Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.450936 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.452980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d"} Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.469770 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5
646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.507584 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.551519 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.596825 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.630324 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.676707 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.704268 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.750675 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\"
:\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.784685 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.824364 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.863099 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.891532 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.891675 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.891649991 +0000 UTC m=+26.968483020 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.906022 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc 
kubenswrapper[4902]: I0121 14:34:20.945749 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.984711 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.993124 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:20 crc 
kubenswrapper[4902]: I0121 14:34:20.993167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.993189 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:20 crc kubenswrapper[4902]: I0121 14:34:20.993208 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993282 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993323 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993368 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993366 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993332 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.993319009 +0000 UTC m=+27.070152038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993442 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.993425042 +0000 UTC m=+27.070258071 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993384 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993457 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993490 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993502 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.993483053 +0000 UTC m=+27.070316082 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993504 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:20 crc kubenswrapper[4902]: E0121 14:34:20.993577 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:24.993557345 +0000 UTC m=+27.070390374 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.025898 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.063397 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.066719 4902 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.068512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.068548 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.068562 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.068634 4902 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.129397 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.136153 4902 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.136413 4902 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137576 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137585 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137599 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.137608 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.159371 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162345 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162378 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162390 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162409 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.162420 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.176907 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180432 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180483 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180502 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.180516 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.192471 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195957 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.195981 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.199625 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.209433 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216303 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216318 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.216330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.227990 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.228130 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.228299 4902 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230240 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230262 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230271 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230284 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.230292 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.250960 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:03:22.52646036 +0000 UTC Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.265945 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:
8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.294301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.294301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.294407 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.294551 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.294598 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:21 crc kubenswrapper[4902]: E0121 14:34:21.294656 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332709 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332745 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332773 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.332786 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434649 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.434732 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460865 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460893 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460914 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.460950 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.463602 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30" exitCode=0 Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.463788 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.480336 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.495582 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.508453 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.520639 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.532961 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538218 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538230 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.538262 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.544809 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.559772 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lg6wz"] Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.560232 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.560604 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.575428 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.595242 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.598526 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f01bb5a-c917-4341-a173-725a85c1f0d2-host\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.598581 4902 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv4jk\" (UniqueName: \"kubernetes.io/projected/6f01bb5a-c917-4341-a173-725a85c1f0d2-kube-api-access-gv4jk\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.598622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f01bb5a-c917-4341-a173-725a85c1f0d2-serviceca\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.616228 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.637054 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640602 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.640612 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.670640 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.699377 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f01bb5a-c917-4341-a173-725a85c1f0d2-host\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.699434 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv4jk\" (UniqueName: \"kubernetes.io/projected/6f01bb5a-c917-4341-a173-725a85c1f0d2-kube-api-access-gv4jk\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.699465 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f01bb5a-c917-4341-a173-725a85c1f0d2-serviceca\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.699624 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f01bb5a-c917-4341-a173-725a85c1f0d2-host\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.700550 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6f01bb5a-c917-4341-a173-725a85c1f0d2-serviceca\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 
14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.707355 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.734976 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv4jk\" (UniqueName: \"kubernetes.io/projected/6f01bb5a-c917-4341-a173-725a85c1f0d2-kube-api-access-gv4jk\") pod \"node-ca-lg6wz\" (UID: \"6f01bb5a-c917-4341-a173-725a85c1f0d2\") " pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743172 4902 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743185 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.743195 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.769552 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.808676 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.845995 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.846110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.846139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.846172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.846195 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.853230 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.873237 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lg6wz" Jan 21 14:34:21 crc kubenswrapper[4902]: W0121 14:34:21.888866 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f01bb5a_c917_4341_a173_725a85c1f0d2.slice/crio-d05e37e7081cdd970393e37320a02beac918a818bf16915ff71656467471a497 WatchSource:0}: Error finding container d05e37e7081cdd970393e37320a02beac918a818bf16915ff71656467471a497: Status 404 returned error can't find the container with id d05e37e7081cdd970393e37320a02beac918a818bf16915ff71656467471a497 Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.900178 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.925424 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.947983 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.948021 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.948037 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.948091 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.948105 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:21Z","lastTransitionTime":"2026-01-21T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:21 crc kubenswrapper[4902]: I0121 14:34:21.966730 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.006327 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.045129 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050237 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050246 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050259 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.050268 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.088252 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.134973 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153498 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153558 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153581 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153614 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.153635 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.181885 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.204015 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.245969 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.251828 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:39:32.744108303 +0000 UTC Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.255930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.255976 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.255989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.256012 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc 
kubenswrapper[4902]: I0121 14:34:22.256024 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.286676 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":
\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.325546 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359196 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359264 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359281 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359301 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.359314 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.367653 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.404419 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.446001 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461420 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.461449 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.468578 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475" exitCode=0 Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.468652 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.470274 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lg6wz" event={"ID":"6f01bb5a-c917-4341-a173-725a85c1f0d2","Type":"ContainerStarted","Data":"5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.470356 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lg6wz" event={"ID":"6f01bb5a-c917-4341-a173-725a85c1f0d2","Type":"ContainerStarted","Data":"d05e37e7081cdd970393e37320a02beac918a818bf16915ff71656467471a497"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.482648 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.527899 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563309 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563349 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563361 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563375 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.563384 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.574162 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\
\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.607267 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.645635 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665965 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665975 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.665999 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.684346 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.724390 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768531 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768566 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768577 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768605 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.768834 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a97
0d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.804264 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.842747 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871291 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871314 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871334 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.871343 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.887079 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.925996 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.972405 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:22 crc kubenswrapper[4902]: I0121 14:34:22.974384 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:22Z","lastTransitionTime":"2026-01-21T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.008683 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.043793 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077563 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077577 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077618 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.077662 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.083381 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc 
kubenswrapper[4902]: I0121 14:34:23.127590 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180644 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180692 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180709 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.180745 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.252697 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:23:49.653877099 +0000 UTC Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283689 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283709 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.283720 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.293847 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.293870 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.293854 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:23 crc kubenswrapper[4902]: E0121 14:34:23.293971 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:23 crc kubenswrapper[4902]: E0121 14:34:23.294023 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:23 crc kubenswrapper[4902]: E0121 14:34:23.294107 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386241 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386251 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386267 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.386278 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.486389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.489860 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.489935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.489958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.489988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.490012 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.490862 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f" exitCode=0 Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.490915 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.509002 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.524647 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.543096 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.558400 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.574950 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.590200 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.592510 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.604845 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.622060 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.637419 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.650056 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.673033 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698610 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698660 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.698708 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.723219 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.740825 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.759098 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.773570 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801623 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801660 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801690 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.801700 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904739 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904755 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:23 crc kubenswrapper[4902]: I0121 14:34:23.904766 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:23Z","lastTransitionTime":"2026-01-21T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007510 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007543 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007554 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007568 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.007579 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.109906 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.109979 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.110003 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.110033 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.110101 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213108 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.213135 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.253015 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:41:48.95167179 +0000 UTC Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315645 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315741 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.315755 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418250 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418329 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418353 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.418369 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.502832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerStarted","Data":"26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520854 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520908 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520925 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520953 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.520973 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.524387 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.541961 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.559403 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.573876 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.589983 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.605646 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624199 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624209 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624223 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.624233 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.630392 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.646464 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.663520 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.680012 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.695134 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.719597 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726613 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726625 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726642 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.726654 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.744197 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.764790 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.784506 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829890 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829944 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.829971 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.929597 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:24 crc kubenswrapper[4902]: E0121 14:34:24.929867 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:32.92983689 +0000 UTC m=+35.006669919 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932613 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932654 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932667 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932682 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:24 crc kubenswrapper[4902]: I0121 14:34:24.932693 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:24Z","lastTransitionTime":"2026-01-21T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.031475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031613 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031749 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.031734846 +0000 UTC m=+35.108567865 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031940 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031979 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.031999 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.032077 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032080 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.032034345 +0000 UTC m=+35.108867414 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.032114 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.032134 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032212 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032228 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032237 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032264 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.032255451 +0000 UTC m=+35.109088470 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032301 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.032326 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.032319053 +0000 UTC m=+35.109152082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036340 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.036413 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139517 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139613 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.139623 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242750 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242831 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.242878 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.253540 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:23:00.154338694 +0000 UTC Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.294166 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.294207 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.294301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.294408 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.294550 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:25 crc kubenswrapper[4902]: E0121 14:34:25.294722 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344815 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344853 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344866 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344884 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.344899 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448237 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448337 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.448396 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.512308 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870" exitCode=0 Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.512379 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.534846 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552266 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552274 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552295 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.552305 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.556928 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.579735 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.597153 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.611538 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.628348 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.651290 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.655921 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.665774 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.680788 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.696126 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.712459 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.737401 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.753125 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.758773 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.767150 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.780945 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:25Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861861 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861892 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861901 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861916 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.861927 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.964878 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.965175 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.965186 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.965200 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:25 crc kubenswrapper[4902]: I0121 14:34:25.965211 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:25Z","lastTransitionTime":"2026-01-21T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067829 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067845 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067867 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.067886 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170725 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170754 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.170763 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.254127 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:34:06.114480974 +0000 UTC Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273395 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273446 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273458 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.273494 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376541 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.376575 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479487 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479511 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.479521 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.521372 4902 generic.go:334] "Generic (PLEG): container finished" podID="7dbee8a9-6952-46b5-a958-ff8f1847fabd" containerID="99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93" exitCode=0 Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.521489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerDied","Data":"99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.528419 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.528855 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.528896 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.529019 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.557389 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.563113 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.574121 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.581636 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582591 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582665 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.582746 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.597071 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.615004 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.629137 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.647566 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.661890 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.675447 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685286 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685471 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685485 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.685519 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.690962 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.706369 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.721907 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.735916 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.749354 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.762309 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.774914 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.787961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.788001 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.788011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.788027 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.788054 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.793866 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.808451 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.822542 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.834578 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.846062 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.861590 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26f
bb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.878918 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890860 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.890933 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.891350 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.904063 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.917474 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.931476 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.950278 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.964275 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.977885 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.992825 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993745 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993775 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:26 crc kubenswrapper[4902]: I0121 14:34:26.993787 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:26Z","lastTransitionTime":"2026-01-21T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096748 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096765 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.096777 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.177332 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.193597 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.199178 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.209693 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.223718 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.239027 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.253287 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.254359 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:58:16.887998491 +0000 UTC Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.269282 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.287891 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.294884 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.294906 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:27 crc kubenswrapper[4902]: E0121 14:34:27.295076 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:27 crc kubenswrapper[4902]: E0121 14:34:27.295144 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.294911 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:27 crc kubenswrapper[4902]: E0121 14:34:27.295223 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301798 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301834 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.301858 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.311558 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.327178 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.346343 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.366844 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.381879 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405810 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405838 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.405938 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.406561 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\
":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 
14:34:27.423704 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.436598 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508683 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508753 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.508793 4902 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.537457 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" event={"ID":"7dbee8a9-6952-46b5-a958-ff8f1847fabd","Type":"ContainerStarted","Data":"5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.564856 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.587073 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.609009 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612434 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612486 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.612515 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.632299 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.647289 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.672534 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.689403 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.708370 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715451 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.715481 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.730487 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.744797 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.766096 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b
3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.784144 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.797791 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5b
c5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.808507 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817427 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.817502 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.822104 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920313 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920385 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:27 crc kubenswrapper[4902]: I0121 14:34:27.920397 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:27Z","lastTransitionTime":"2026-01-21T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.022973 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.023014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.023025 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.023056 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.023066 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126417 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126476 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.126525 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228920 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.228998 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.255460 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:11:40.141525214 +0000 UTC Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.316254 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30
a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure
-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.331482 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332330 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332402 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332418 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.332455 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.346166 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.363422 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.380288 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434093 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434133 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434144 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.434171 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.443360 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.455989 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.467281 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.479559 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.494492 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.507086 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.519480 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.533766 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537116 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537305 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537383 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.537442 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.541898 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/0.log" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.544770 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250" exitCode=1 Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.544917 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.545946 4902 scope.go:117] "RemoveContainer" containerID="14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.551606 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.564902 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.578875 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.592060 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.608551 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.623383 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.635659 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.639671 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.639809 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.639916 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.639999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.640103 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.648933 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.696635 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.731428 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.744892 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.746923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.746966 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.746979 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.746999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.747012 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.760163 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.773757 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.793847 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b
3f6a4021c40ab28615358250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.816831 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.831318 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.844561 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.849736 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952665 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952674 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952690 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:28 crc kubenswrapper[4902]: I0121 14:34:28.952700 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:28Z","lastTransitionTime":"2026-01-21T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055876 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.055960 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158171 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.158198 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.256448 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:46:53.682575152 +0000 UTC Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.265797 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.266156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.266176 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.266198 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.266214 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.294173 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.294219 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:29 crc kubenswrapper[4902]: E0121 14:34:29.294342 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.294374 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:29 crc kubenswrapper[4902]: E0121 14:34:29.294483 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:29 crc kubenswrapper[4902]: E0121 14:34:29.294562 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369689 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369731 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369746 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.369757 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472537 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472617 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472635 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472661 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.472682 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.551399 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/0.log" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.555647 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.556375 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575692 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575735 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575762 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.575774 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.576212 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.597368 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.621227 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.635570 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.650740 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.669404 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.678934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.678993 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc 
kubenswrapper[4902]: I0121 14:34:29.679007 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.679060 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.679078 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.693904 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.712272 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.726396 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.742937 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.761349 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781750 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781825 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781846 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781878 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.781903 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.786758 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network 
controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.803681 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.815761 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.829289 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.884886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.884947 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.884968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.884997 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.885021 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987776 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987879 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987912 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:29 crc kubenswrapper[4902]: I0121 14:34:29.987938 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:29Z","lastTransitionTime":"2026-01-21T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091567 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091603 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.091627 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195430 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195459 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.195473 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.257674 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:32:48.57644292 +0000 UTC Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298451 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298515 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298526 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298546 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.298556 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401864 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401890 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.401905 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505107 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505201 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505262 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.505285 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609467 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609600 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.609623 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.712960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.713019 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.713032 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.713109 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.713124 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815409 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815487 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815506 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.815517 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918398 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:30 crc kubenswrapper[4902]: I0121 14:34:30.918526 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:30Z","lastTransitionTime":"2026-01-21T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.020970 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.021014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.021026 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.021074 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.021087 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125252 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125337 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.125382 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228875 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228893 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228920 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.228939 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.258561 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:18:24.823850548 +0000 UTC Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.265904 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw"] Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.266431 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.269570 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.272377 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.285464 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.294557 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.294723 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.294994 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.295184 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.295656 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.295729 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.302298 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.321734 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.331625 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.331811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.331933 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.332104 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.332225 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.339494 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.358201 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.374636 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.392362 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.401854 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f00b2c1e-2662-466e-b936-05f43db67fec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.401926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.402067 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4sj6\" (UniqueName: \"kubernetes.io/projected/f00b2c1e-2662-466e-b936-05f43db67fec-kube-api-access-p4sj6\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.402150 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403191 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403233 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403251 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403277 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.403291 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.409349 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.419100 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.421252 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423685 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423717 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.423742 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.441174 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 
2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.445820 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446211 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446307 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.446328 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.466092 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 
2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470965 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.470975 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.478491 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.483057 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487418 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487467 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.487514 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.496078 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.500566 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.500853 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502534 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502628 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502676 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502737 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f00b2c1e-2662-466e-b936-05f43db67fec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502760 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502780 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4sj6\" (UniqueName: \"kubernetes.io/projected/f00b2c1e-2662-466e-b936-05f43db67fec-kube-api-access-p4sj6\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502711 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502928 4902 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.502990 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.503590 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.503632 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f00b2c1e-2662-466e-b936-05f43db67fec-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.509657 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.511454 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f00b2c1e-2662-466e-b936-05f43db67fec-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.520759 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p4sj6\" (UniqueName: \"kubernetes.io/projected/f00b2c1e-2662-466e-b936-05f43db67fec-kube-api-access-p4sj6\") pod \"ovnkube-control-plane-749d76644c-mpqkw\" (UID: \"f00b2c1e-2662-466e-b936-05f43db67fec\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.523489 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.537603 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.555232 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190
222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: 
f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.563487 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/1.log" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.564450 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/0.log" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.567147 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae" exitCode=1 Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.567183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.567234 4902 scope.go:117] "RemoveContainer" containerID="14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.569902 4902 scope.go:117] "RemoveContainer" containerID="c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae" Jan 21 14:34:31 crc kubenswrapper[4902]: E0121 14:34:31.570223 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.584864 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e
2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.588708 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.598102 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: W0121 14:34:31.606581 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf00b2c1e_2662_466e_b936_05f43db67fec.slice/crio-2980df735d450d2bb49d80b102f7391b6e3ce04ca275525a69e4d76b33131155 WatchSource:0}: Error finding container 2980df735d450d2bb49d80b102f7391b6e3ce04ca275525a69e4d76b33131155: Status 404 returned error can't find the container with id 2980df735d450d2bb49d80b102f7391b6e3ce04ca275525a69e4d76b33131155 Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.607727 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.607849 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.607949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.608032 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.608140 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.613847 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.627701 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.647591 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190
222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.670702 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.684400 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.695979 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.709655 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710767 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710809 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710842 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.710855 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.722473 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.736823 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.751371 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.765934 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.779358 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.791557 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.810505 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.813379 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.813416 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc 
kubenswrapper[4902]: I0121 14:34:31.813426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.813442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.813451 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916333 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:31 crc kubenswrapper[4902]: I0121 14:34:31.916395 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:31Z","lastTransitionTime":"2026-01-21T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019284 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019327 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019380 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.019392 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123697 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123812 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.123866 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227071 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227087 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.227146 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.259145 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:49:19.693132728 +0000 UTC Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330449 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330517 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330536 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.330556 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.391626 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kq588"] Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.392092 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: E0121 14:34:32.392154 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.413246 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac695
8b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.428694 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432636 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432698 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432737 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.432752 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.445955 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.465151 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.479350 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.510722 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190
222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.516465 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.516516 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh22z\" (UniqueName: \"kubernetes.io/projected/05d94e6a-249a-484c-8895-085e81f1dfaa-kube-api-access-wh22z\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.522826 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.533708 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535186 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535217 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535244 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.535255 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.547368 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.558332 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.569057 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.570597 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" event={"ID":"f00b2c1e-2662-466e-b936-05f43db67fec","Type":"ContainerStarted","Data":"2980df735d450d2bb49d80b102f7391b6e3ce04ca275525a69e4d76b33131155"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.583215 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.595073 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.609976 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.617499 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.617536 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh22z\" (UniqueName: \"kubernetes.io/projected/05d94e6a-249a-484c-8895-085e81f1dfaa-kube-api-access-wh22z\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: E0121 14:34:32.617731 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:32 crc kubenswrapper[4902]: E0121 14:34:32.617876 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:33.117843483 +0000 UTC m=+35.194676692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.626534 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.634980 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh22z\" (UniqueName: \"kubernetes.io/projected/05d94e6a-249a-484c-8895-085e81f1dfaa-kube-api-access-wh22z\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.640825 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.640885 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.640903 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: 
I0121 14:34:32.640928 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.640946 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.643910 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.666966 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:32Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.743881 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.743941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc 
kubenswrapper[4902]: I0121 14:34:32.743951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.743967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.743979 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.846847 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.847231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.847385 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.847491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.847573 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951021 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951108 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951155 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:32 crc kubenswrapper[4902]: I0121 14:34:32.951170 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:32Z","lastTransitionTime":"2026-01-21T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.022756 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.023078 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.023029369 +0000 UTC m=+51.099862418 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054912 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054973 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.054997 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124199 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124248 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124276 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.124319 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124382 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124405 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124411 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124444 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.124428651 +0000 UTC m=+51.201261680 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124469 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124510 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124535 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124487 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:34.124473642 +0000 UTC m=+36.201306671 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124627 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.124602166 +0000 UTC m=+51.201435415 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124626 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124686 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124656 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.124648247 +0000 UTC m=+51.201481506 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124704 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.124817 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:34:49.124785101 +0000 UTC m=+51.201618160 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157797 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157849 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157876 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.157888 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.259827 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:32:06.94026988 +0000 UTC Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261377 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261394 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.261428 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.294706 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.294757 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.294830 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.294851 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.294994 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:33 crc kubenswrapper[4902]: E0121 14:34:33.295199 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364856 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364898 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.364939 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467326 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467364 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467387 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.467397 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570355 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570364 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570381 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.570392 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.575141 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/1.log" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.580831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" event={"ID":"f00b2c1e-2662-466e-b936-05f43db67fec","Type":"ContainerStarted","Data":"baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.580900 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" event={"ID":"f00b2c1e-2662-466e-b936-05f43db67fec","Type":"ContainerStarted","Data":"4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.596035 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.609409 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.621902 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.636495 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.654808 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.672960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.673029 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.673086 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.673119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.673142 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.676216 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.689719 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.713291 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.728142 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.758317 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190
222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776663 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776683 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.776728 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.779882 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.797637 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.812339 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.823747 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815
d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.840913 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.861074 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"
name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.875876 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879621 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879662 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.879673 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982029 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:33 crc kubenswrapper[4902]: I0121 14:34:33.982178 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:33Z","lastTransitionTime":"2026-01-21T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.084947 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.134519 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:34 crc kubenswrapper[4902]: E0121 14:34:34.134763 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:34 crc kubenswrapper[4902]: E0121 14:34:34.134881 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:36.134852799 +0000 UTC m=+38.211685868 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187899 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187982 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.187993 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.261112 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:54:16.882051799 +0000 UTC Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290393 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290452 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.290494 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.294816 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:34 crc kubenswrapper[4902]: E0121 14:34:34.295010 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.393939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.393985 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.393998 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.394017 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.394031 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.496913 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.496977 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.496988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.497006 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.497019 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.599928 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.599984 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.599996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.600013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.600027 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702697 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.702730 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807008 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807165 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.807187 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910423 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910452 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:34 crc kubenswrapper[4902]: I0121 14:34:34.910508 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:34Z","lastTransitionTime":"2026-01-21T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013347 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013721 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013789 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.013922 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.117956 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.118023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.118070 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.118097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.118119 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221299 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221357 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221371 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221392 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.221406 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.261520 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:05:10.096984402 +0000 UTC Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.294082 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.294105 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:35 crc kubenswrapper[4902]: E0121 14:34:35.294206 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.294237 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:35 crc kubenswrapper[4902]: E0121 14:34:35.294344 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:35 crc kubenswrapper[4902]: E0121 14:34:35.294558 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.324802 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.325169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.325348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.325505 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.325629 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.428984 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.429094 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.429119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.429155 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.429179 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.531977 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.532344 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.532450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.532533 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.532617 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.635887 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.635941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.635960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.635983 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.636002 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739601 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.739656 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843167 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843218 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843234 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843289 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.843305 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947273 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947319 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:35 crc kubenswrapper[4902]: I0121 14:34:35.947337 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:35Z","lastTransitionTime":"2026-01-21T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.050365 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.050701 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.050878 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.051004 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.051220 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.154659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.155216 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.155411 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.155627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.155817 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.159419 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:36 crc kubenswrapper[4902]: E0121 14:34:36.159664 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:36 crc kubenswrapper[4902]: E0121 14:34:36.159774 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:40.159744329 +0000 UTC m=+42.236577398 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.258695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.259011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.259096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.259172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.259233 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.262090 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:03:20.757551386 +0000 UTC Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.294825 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:36 crc kubenswrapper[4902]: E0121 14:34:36.295080 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362155 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.362203 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464459 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464513 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.464571 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567858 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567916 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567938 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.567988 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671608 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.671629 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774896 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774909 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.774948 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878398 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878495 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878528 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878602 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.878635 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982364 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982541 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982620 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:36 crc kubenswrapper[4902]: I0121 14:34:36.982707 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:36Z","lastTransitionTime":"2026-01-21T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.086863 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.086996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.087072 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.087169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.087196 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.190905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.190952 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.190969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.190992 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.191010 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.262919 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:21:36.620908961 +0000 UTC Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294082 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294184 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294182 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294204 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294215 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294280 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:37 crc kubenswrapper[4902]: E0121 14:34:37.294368 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.294451 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:37 crc kubenswrapper[4902]: E0121 14:34:37.294645 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:37 crc kubenswrapper[4902]: E0121 14:34:37.294802 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398276 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.398304 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.501946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.502005 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.502024 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.502089 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.502126 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606007 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606148 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.606199 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709754 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709769 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709790 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.709807 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812622 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812636 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.812670 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915539 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:37 crc kubenswrapper[4902]: I0121 14:34:37.915625 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:37Z","lastTransitionTime":"2026-01-21T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019009 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019070 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019080 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.019108 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.121915 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.121967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.121977 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.121996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.122006 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224856 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.224984 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.264016 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:08:36.881962447 +0000 UTC Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.295083 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:38 crc kubenswrapper[4902]: E0121 14:34:38.295246 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.313895 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328716 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328765 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.328784 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.332937 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.353586 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.382711 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.404629 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.427457 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431404 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431433 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431445 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.431476 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.445580 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.460147 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.476253 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.495881 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.510655 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.533897 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14dc3aa08dabae09044e1c7d19447f77268d775b3f6a4021c40ab28615358250\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:28Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:34:28.257205 6159 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:34:28.257243 6159 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 14:34:28.257251 6159 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 14:34:28.257264 6159 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 14:34:28.257283 6159 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:34:28.257289 6159 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:34:28.257314 6159 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:34:28.257426 6159 factory.go:656] Stopping watch factory\\\\nI0121 14:34:28.257445 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:28.257342 6159 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 14:34:28.257476 6159 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:28.257483 6159 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:34:28.257493 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:28.257569 6159 ovnkube.go:137] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534200 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534261 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534275 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534298 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.534310 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.559693 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.580037 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.598391 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.616028 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.629679 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.636952 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.637069 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.637084 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.637103 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.637119 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740863 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740881 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.740921 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844271 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844326 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.844388 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947405 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947425 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947447 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:38 crc kubenswrapper[4902]: I0121 14:34:38.947461 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:38Z","lastTransitionTime":"2026-01-21T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051197 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051230 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051259 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.051283 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155037 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.155148 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258576 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258647 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258670 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.258687 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.264697 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:57:03.840094141 +0000 UTC Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.294140 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:39 crc kubenswrapper[4902]: E0121 14:34:39.294279 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.294609 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:39 crc kubenswrapper[4902]: E0121 14:34:39.294732 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.294739 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:39 crc kubenswrapper[4902]: E0121 14:34:39.294811 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360655 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360697 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360740 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.360802 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463410 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463479 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463518 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.463539 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.565988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.566030 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.566056 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.566072 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.566084 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669454 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669466 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669483 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.669495 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776411 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776502 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.776571 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.879620 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.880012 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.880207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.880342 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.880444 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.982971 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.983011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.983020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.983036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:39 crc kubenswrapper[4902]: I0121 14:34:39.983077 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:39Z","lastTransitionTime":"2026-01-21T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086260 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086324 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.086390 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189422 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189652 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.189724 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.205986 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:40 crc kubenswrapper[4902]: E0121 14:34:40.206426 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:40 crc kubenswrapper[4902]: E0121 14:34:40.206694 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:34:48.206659979 +0000 UTC m=+50.283493038 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.265228 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:08:47.12303033 +0000 UTC Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292574 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.292618 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.294741 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:40 crc kubenswrapper[4902]: E0121 14:34:40.294939 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395381 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395453 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.395501 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497561 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.497575 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600667 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600701 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.600714 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703641 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.703668 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806194 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806243 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.806293 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909530 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909599 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909646 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:40 crc kubenswrapper[4902]: I0121 14:34:40.909663 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:40Z","lastTransitionTime":"2026-01-21T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012467 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012476 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.012500 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.115658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.115924 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.116037 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.116162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.116280 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219611 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.219665 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.266415 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:36:02.791811513 +0000 UTC Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.294805 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.294827 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.294846 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.295563 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.295680 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.295321 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.322873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.323293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.323416 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.323522 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.323648 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.426727 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510371 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510465 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.510486 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.532105 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537154 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537235 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537256 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.537269 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.554206 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559742 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559802 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559813 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559835 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.559851 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.576088 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580860 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580920 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.580975 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.602405 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606803 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606852 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606867 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606913 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.606939 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.625505 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:41 crc kubenswrapper[4902]: E0121 14:34:41.625658 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627589 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627656 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.627669 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731453 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.731519 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834663 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834673 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834689 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.834701 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937189 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937200 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937216 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:41 crc kubenswrapper[4902]: I0121 14:34:41.937227 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:41Z","lastTransitionTime":"2026-01-21T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040844 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040953 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.040971 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144632 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144650 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144674 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.144694 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.247978 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.248109 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.248123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.248148 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.248159 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.267447 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:15:47.746234243 +0000 UTC Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.295003 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:42 crc kubenswrapper[4902]: E0121 14:34:42.295297 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.296132 4902 scope.go:117] "RemoveContainer" containerID="c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.334939 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.350989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.351087 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.351117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.351143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.351162 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.358533 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.378807 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.398268 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.420581 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.438793 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454741 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.454835 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.457266 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.474507 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.501105 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.517386 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.528457 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.542009 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.552701 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557642 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557717 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.557751 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.566762 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.584974 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.599580 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.612713 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.617030 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/1.log" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.624572 4902 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.624997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.641473 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernet
es\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.654183 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660115 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.660138 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.673993 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.692739 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.705669 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.734668 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.755636 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762377 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.762419 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.771242 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.786318 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.799709 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.811716 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.830280 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.852491 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865575 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865602 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.865613 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.867651 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.881723 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.894301 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815
d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.905515 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:42Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968741 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:42 crc kubenswrapper[4902]: I0121 14:34:42.968857 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:42Z","lastTransitionTime":"2026-01-21T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.072516 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175637 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175681 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175713 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.175726 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.268300 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:37:47.244109875 +0000 UTC Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.278972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.279026 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.279296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.279341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.279357 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.294493 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.294543 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:43 crc kubenswrapper[4902]: E0121 14:34:43.294664 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:43 crc kubenswrapper[4902]: E0121 14:34:43.294826 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.294696 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:43 crc kubenswrapper[4902]: E0121 14:34:43.294978 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382456 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382482 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.382503 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486189 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486238 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486313 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.486325 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589303 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589339 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589352 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589368 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.589380 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.631362 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/2.log" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.632799 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/1.log" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.636783 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" exitCode=1 Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.636842 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.636891 4902 scope.go:117] "RemoveContainer" containerID="c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.637961 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:34:43 crc kubenswrapper[4902]: E0121 14:34:43.639385 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.654730 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.667646 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.680195 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.692671 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.692970 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.693078 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.693160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.693245 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.695763 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.716294 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.728449 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.754227 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.771781 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.788682 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795827 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795929 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.795988 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.805905 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.822542 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.849692 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.867838 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5b
c5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.885740 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c
1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.898962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.899028 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.899072 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.899096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.899117 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:43Z","lastTransitionTime":"2026-01-21T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.906357 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.921758 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:43 crc kubenswrapper[4902]: I0121 14:34:43.949144 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04a6f46f70e88b93f73d44608144d0834215190222d4f504853060fa40dc1ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:30Z\\\",\\\"message\\\":\\\":443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 14:34:29.333853 6305 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:34:29.333878 6305 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 14:34:29.333891 6305 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:34:29.333901 6305 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:34:29.333966 6305 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:29Z is after 2025-08-24T17:21:41Z]\\\\nI0121 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: 
*v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.002930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.002987 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.003000 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.003022 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.003039 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106749 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106798 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.106870 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210493 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.210536 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.269370 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:10:43.611320112 +0000 UTC Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.294204 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:44 crc kubenswrapper[4902]: E0121 14:34:44.294486 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.313902 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.313986 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.314009 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.314088 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.314116 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417445 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417466 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.417516 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.520929 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.521019 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.521105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.521155 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.521178 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625000 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625102 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625149 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.625171 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.643424 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/2.log" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.648575 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:34:44 crc kubenswrapper[4902]: E0121 14:34:44.648763 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.665424 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.682324 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.705464 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.723194 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.728608 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.743962 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.763449 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.782963 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.796603 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.808181 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.823995 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.831365 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.831415 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc 
kubenswrapper[4902]: I0121 14:34:44.831431 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.831455 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.831470 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.838015 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.861776 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.882837 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5b
c5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.907722 4902 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c
1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.927341 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934003 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934085 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.934162 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:44Z","lastTransitionTime":"2026-01-21T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.940854 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:44 crc kubenswrapper[4902]: I0121 14:34:44.972923 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037729 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037819 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.037835 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140652 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140732 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140751 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.140801 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243579 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243677 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.243724 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.270212 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:26:55.123853126 +0000 UTC Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.294891 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.295084 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:45 crc kubenswrapper[4902]: E0121 14:34:45.295147 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:45 crc kubenswrapper[4902]: E0121 14:34:45.295373 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.295569 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:45 crc kubenswrapper[4902]: E0121 14:34:45.295714 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347569 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347592 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347625 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.347649 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451574 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451651 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451669 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.451712 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555261 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555318 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555332 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.555342 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658295 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658320 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.658330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761727 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761795 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761807 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761826 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.761839 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865661 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865760 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865787 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.865805 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.968930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.969014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.969090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.969164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:45 crc kubenswrapper[4902]: I0121 14:34:45.969189 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:45Z","lastTransitionTime":"2026-01-21T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072590 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072667 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.072758 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175455 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175477 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.175532 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.271227 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:51:12.351134791 +0000 UTC Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279274 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279299 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279330 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.279354 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.309762 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:46 crc kubenswrapper[4902]: E0121 14:34:46.310099 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382342 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382353 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.382381 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485709 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485781 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.485847 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590262 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.590294 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.667258 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.683885 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.687350 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693760 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693779 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693805 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.693824 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.704824 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.724991 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.744583 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.762804 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.779794 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796487 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796554 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796574 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796600 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.796618 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.808492 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d791
18ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.844119 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:
33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.863987 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.882777 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899461 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899518 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899537 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899563 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.899581 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:46Z","lastTransitionTime":"2026-01-21T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.905720 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.924455 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.954670 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.971665 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:46 crc kubenswrapper[4902]: I0121 14:34:46.987771 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:46Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.002768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.003229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.003470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.003680 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.003890 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.010767 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:47Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.029581 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:47Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107419 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107514 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.107536 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.210977 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.211084 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.211111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.211142 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.211165 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.271718 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:23:27.474577242 +0000 UTC Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.294544 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.294609 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.294636 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:47 crc kubenswrapper[4902]: E0121 14:34:47.294729 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:47 crc kubenswrapper[4902]: E0121 14:34:47.294897 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:47 crc kubenswrapper[4902]: E0121 14:34:47.295116 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315078 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315144 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315170 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315200 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.315223 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422525 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.422540 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524521 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524591 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524617 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524686 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.524716 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628444 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628521 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.628563 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.731900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.731978 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.731995 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.732020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.732038 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.835910 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.835971 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.835990 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.836014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.836032 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939258 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939317 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939333 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:47 crc kubenswrapper[4902]: I0121 14:34:47.939345 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:47Z","lastTransitionTime":"2026-01-21T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042030 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042136 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042159 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.042212 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146211 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146234 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.146252 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249120 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249224 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.249237 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.272464 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:53:03.438077509 +0000 UTC Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.293953 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:48 crc kubenswrapper[4902]: E0121 14:34:48.294167 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.299172 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:48 crc kubenswrapper[4902]: E0121 14:34:48.299380 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:48 crc kubenswrapper[4902]: E0121 14:34:48.299453 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:35:04.299431366 +0000 UTC m=+66.376264425 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.316942 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.337225 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351536 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351575 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.351589 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.372093 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.400228 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48
f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.423665 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.436426 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454137 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.454171 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.459138 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.474423 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.490189 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.503036 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.547868 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556523 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.556563 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.570752 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.582469 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.602283 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.613571 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.626116 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.637893 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.650400 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.658926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.659105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.659205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.659296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.659382 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.762251 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.762625 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.762851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.763139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.763354 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.866580 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.866887 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.866987 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.867139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.867296 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969817 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969937 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969970 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:48 crc kubenswrapper[4902]: I0121 14:34:48.969993 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:48Z","lastTransitionTime":"2026-01-21T14:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073265 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073282 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073306 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.073323 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.108312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.108518 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:35:21.10848391 +0000 UTC m=+83.185316979 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176830 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176883 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.176940 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.209805 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.209896 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.209947 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.209984 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210097 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210204 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210266 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210285 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210301 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210372 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:35:21.21017471 +0000 UTC m=+83.287007769 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210401 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210462 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210479 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210418 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:35:21.210398807 +0000 UTC m=+83.287231876 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210575 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:35:21.210553751 +0000 UTC m=+83.287386970 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.210873 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:35:21.21086024 +0000 UTC m=+83.287693269 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.272578 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:19:21.021485001 +0000 UTC Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279303 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.279373 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.293926 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.293982 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.294177 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.294332 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.294467 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:49 crc kubenswrapper[4902]: E0121 14:34:49.295033 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381879 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381983 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.381997 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.484871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.485228 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.485363 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.485497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.485667 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588732 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588772 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.588807 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691383 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691394 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.691428 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794170 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794222 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794234 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.794267 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897292 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:49 crc kubenswrapper[4902]: I0121 14:34:49.897375 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:49Z","lastTransitionTime":"2026-01-21T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.000886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.001020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.001110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.001149 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.001173 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103392 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103445 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103461 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.103471 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.206590 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.207233 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.207296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.207339 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.207369 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.274039 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:08:33.218483887 +0000 UTC Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.294801 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:50 crc kubenswrapper[4902]: E0121 14:34:50.295026 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.310658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.311106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.311275 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.311424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.311576 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415651 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.415672 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518531 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518543 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.518574 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.620905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.621026 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.621089 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.621128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.621150 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724535 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.724585 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827298 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.827420 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930404 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930523 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:50 crc kubenswrapper[4902]: I0121 14:34:50.930546 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:50Z","lastTransitionTime":"2026-01-21T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033091 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033102 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033118 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.033131 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136439 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136548 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136580 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.136608 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239506 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239577 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.239586 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.274434 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:32:35.978644298 +0000 UTC Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.294905 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.294966 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.294917 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.295155 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.295248 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.295311 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343226 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343250 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343280 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.343304 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446317 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446339 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.446350 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549103 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549168 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549185 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549208 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.549225 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652479 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652576 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652606 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.652634 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716081 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716406 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.716615 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.730402 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734836 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734887 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734929 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.734946 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.750174 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.753950 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.769958 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777415 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.777454 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.792448 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795874 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795924 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.795951 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.813790 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:51Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:51 crc kubenswrapper[4902]: E0121 14:34:51.813970 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815814 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815853 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815870 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.815881 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918244 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918278 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:51 crc kubenswrapper[4902]: I0121 14:34:51.918299 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:51Z","lastTransitionTime":"2026-01-21T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021654 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021729 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.021800 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.124955 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.125031 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.125093 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.125131 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.125156 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227885 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227942 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227956 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227978 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.227991 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.275328 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:15:56.212537493 +0000 UTC Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.294768 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:52 crc kubenswrapper[4902]: E0121 14:34:52.294947 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330488 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330516 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.330529 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434165 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434196 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.434251 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538124 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.538163 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641355 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641363 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641376 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.641387 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743845 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743891 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743902 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743921 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.743936 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847743 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.847837 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950349 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950517 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:52 crc kubenswrapper[4902]: I0121 14:34:52.950587 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:52Z","lastTransitionTime":"2026-01-21T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054124 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054182 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054213 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.054222 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183159 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183173 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183197 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.183214 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.276022 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:13:58.22597301 +0000 UTC Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285460 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285494 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285527 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.285536 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.294842 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:53 crc kubenswrapper[4902]: E0121 14:34:53.295035 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.295078 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.295164 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:53 crc kubenswrapper[4902]: E0121 14:34:53.295259 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:53 crc kubenswrapper[4902]: E0121 14:34:53.295396 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388527 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388572 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388585 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.388596 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492425 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492483 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.492522 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595525 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595614 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.595630 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698815 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698887 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.698928 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.802513 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905208 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905217 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:53 crc kubenswrapper[4902]: I0121 14:34:53.905238 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:53Z","lastTransitionTime":"2026-01-21T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.008963 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.009014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.009030 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.009067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.009083 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.112492 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215530 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.215561 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.276414 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:14:10.761399868 +0000 UTC Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.294250 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:54 crc kubenswrapper[4902]: E0121 14:34:54.294401 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.317926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.317964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.317973 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.317989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.318000 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421466 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421541 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421561 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.421603 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524686 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524727 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524741 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524763 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.524779 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627806 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.627834 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730149 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730183 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.730196 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833441 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833487 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.833525 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936490 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936571 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936597 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:54 crc kubenswrapper[4902]: I0121 14:34:54.936614 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:54Z","lastTransitionTime":"2026-01-21T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039259 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039383 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.039396 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141880 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141938 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141955 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.141968 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.244622 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.276997 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:17:22.873603895 +0000 UTC Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.294605 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.294632 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.294658 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:55 crc kubenswrapper[4902]: E0121 14:34:55.294769 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:55 crc kubenswrapper[4902]: E0121 14:34:55.294820 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:55 crc kubenswrapper[4902]: E0121 14:34:55.294880 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346729 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346765 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346777 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.346786 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.449641 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.450003 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.450238 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.450456 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.450634 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553676 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553690 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.553726 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657255 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657323 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657337 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657386 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.657407 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760645 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760723 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760747 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.760801 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.864618 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968537 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968566 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:55 crc kubenswrapper[4902]: I0121 14:34:55.968615 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:55Z","lastTransitionTime":"2026-01-21T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.071966 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.072071 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.072091 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.072114 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.072139 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176007 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176093 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.176152 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.277278 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:37:06.060387284 +0000 UTC Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279193 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279279 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.279319 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.294602 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:56 crc kubenswrapper[4902]: E0121 14:34:56.294870 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382215 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382227 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.382263 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485656 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485723 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485801 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.485825 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589347 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589411 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589466 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.589486 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693521 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693637 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.693657 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.796866 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.796930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.796949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.796980 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.797000 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899783 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:56 crc kubenswrapper[4902]: I0121 14:34:56.899829 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:56Z","lastTransitionTime":"2026-01-21T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.001907 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.001949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.001983 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.002004 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.002017 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104771 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.104786 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.207946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.208021 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.208036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.208080 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.208096 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.278027 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:46:58.23686725 +0000 UTC Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.294620 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.294671 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:57 crc kubenswrapper[4902]: E0121 14:34:57.294752 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:57 crc kubenswrapper[4902]: E0121 14:34:57.294827 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.294871 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:57 crc kubenswrapper[4902]: E0121 14:34:57.295228 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310915 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.310981 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413834 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413909 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.413934 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516324 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516388 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.516397 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619638 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.619828 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722398 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722483 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.722496 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825353 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.825394 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928807 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928819 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928837 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:57 crc kubenswrapper[4902]: I0121 14:34:57.928861 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:57Z","lastTransitionTime":"2026-01-21T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031862 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031888 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.031902 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134573 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134611 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134622 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134640 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.134651 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238087 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238131 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238181 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.238192 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.278200 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:37:22.473047808 +0000 UTC Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.293899 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:34:58 crc kubenswrapper[4902]: E0121 14:34:58.294136 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.318653 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath
\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.334336 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340423 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340489 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340525 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.340562 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.350301 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.365180 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.380902 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.395473 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.410237 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.423242 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.438698 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443597 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443616 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443637 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.443651 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.457887 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d791
18ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.480333 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:
33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.493663 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.511013 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.528392 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.541867 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546607 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546618 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546636 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.546648 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.559822 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 
14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.574454 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.585297 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648779 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648794 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648816 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.648832 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751732 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751742 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751757 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.751770 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854700 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854776 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854831 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.854852 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958392 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958518 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:58 crc kubenswrapper[4902]: I0121 14:34:58.958538 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:58Z","lastTransitionTime":"2026-01-21T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.060942 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.060988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.060998 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.061015 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.061029 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.164917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.165011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.165028 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.165077 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.165096 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268670 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268719 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268734 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268753 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.268765 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.279189 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:07:08.570709555 +0000 UTC Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.294556 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.294585 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.294686 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:59 crc kubenswrapper[4902]: E0121 14:34:59.295286 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:34:59 crc kubenswrapper[4902]: E0121 14:34:59.295417 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:34:59 crc kubenswrapper[4902]: E0121 14:34:59.295481 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.295890 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:34:59 crc kubenswrapper[4902]: E0121 14:34:59.296231 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371822 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371890 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371909 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.371958 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475536 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475546 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475562 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.475574 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578646 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578672 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.578691 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682667 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682682 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682697 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.682706 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786789 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786812 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786890 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.786920 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890136 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890212 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890237 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890272 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.890297 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993624 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:34:59 crc kubenswrapper[4902]: I0121 14:34:59.993724 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:34:59Z","lastTransitionTime":"2026-01-21T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096302 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096350 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.096404 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199349 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199395 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199428 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.199447 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.280395 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:58:32.625458145 +0000 UTC Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.294803 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:00 crc kubenswrapper[4902]: E0121 14:35:00.294929 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302415 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302451 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302477 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.302491 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405875 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405919 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405938 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.405989 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509234 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509283 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509301 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.509345 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612080 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612115 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612140 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.612152 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716375 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716437 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716485 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.716503 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819006 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819086 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.819111 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922201 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922211 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:00 crc kubenswrapper[4902]: I0121 14:35:00.922241 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:00Z","lastTransitionTime":"2026-01-21T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.024795 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.025160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.025256 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.025342 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.025415 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.127915 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.128265 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.128358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.128463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.128552 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.231538 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.280503 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:02:52.284890511 +0000 UTC Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.294884 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.294931 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:01 crc kubenswrapper[4902]: E0121 14:35:01.295146 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.295204 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:01 crc kubenswrapper[4902]: E0121 14:35:01.295297 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:01 crc kubenswrapper[4902]: E0121 14:35:01.295413 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.335508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.335818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.335921 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.336242 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.336355 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.439119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.439615 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.439848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.440018 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.440294 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543814 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.543825 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646498 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646587 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.646603 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.748856 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.749196 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.749389 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.749570 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.749720 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.852551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.852972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.853294 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.853578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.853840 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.956596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.956837 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.956934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.957002 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:01 crc kubenswrapper[4902]: I0121 14:35:01.957107 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:01Z","lastTransitionTime":"2026-01-21T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060650 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060723 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.060737 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164126 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164180 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.164242 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201444 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201491 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.201532 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.213428 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217433 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217493 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.217506 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.228780 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.231964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.231990 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.232000 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.232016 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.232026 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.242584 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.245917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.245951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.245962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.245989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.246003 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.257724 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261127 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.261187 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.271965 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.272158 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273954 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.273966 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.281274 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:38:27.285973398 +0000 UTC Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.294626 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:02 crc kubenswrapper[4902]: E0121 14:35:02.294746 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377168 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377185 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377206 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.377223 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479441 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.479506 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581767 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581807 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581836 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.581849 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684789 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.684862 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.787985 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.788492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.788630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.788768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.788887 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892238 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892278 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892287 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892301 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.892311 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.995420 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.995762 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.995860 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.995951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:02 crc kubenswrapper[4902]: I0121 14:35:02.996030 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:02Z","lastTransitionTime":"2026-01-21T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103153 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103190 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103201 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103217 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.103229 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206546 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206590 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206603 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.206631 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.281434 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:58:12.272896311 +0000 UTC Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.293983 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.294079 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:03 crc kubenswrapper[4902]: E0121 14:35:03.294202 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.294238 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:03 crc kubenswrapper[4902]: E0121 14:35:03.294359 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:03 crc kubenswrapper[4902]: E0121 14:35:03.294461 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308771 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308795 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.308805 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410617 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410648 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.410663 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514010 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514079 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514095 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.514105 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616283 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616316 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616329 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616345 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.616357 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719163 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719216 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.719239 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822061 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822114 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.822151 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.924969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.925037 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.925091 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.925116 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:03 crc kubenswrapper[4902]: I0121 14:35:03.925133 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:03Z","lastTransitionTime":"2026-01-21T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027479 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027533 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.027543 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130614 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130668 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130681 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.130721 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.232990 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.233020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.233032 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.233065 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.233078 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.282084 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:28:07.831308217 +0000 UTC Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.294629 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:04 crc kubenswrapper[4902]: E0121 14:35:04.294774 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335179 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.335266 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.400567 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:04 crc kubenswrapper[4902]: E0121 14:35:04.400741 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:35:04 crc kubenswrapper[4902]: E0121 14:35:04.400809 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:35:36.400791737 +0000 UTC m=+98.477624776 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438002 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438084 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438120 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.438132 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540591 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540633 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.540667 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642561 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642594 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642602 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642616 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.642625 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745286 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745371 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745393 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.745412 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.847949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.848031 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.848081 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.848112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.848131 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950895 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:04 crc kubenswrapper[4902]: I0121 14:35:04.950979 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:04Z","lastTransitionTime":"2026-01-21T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053876 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053932 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.053961 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157523 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157573 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157590 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.157600 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260277 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260310 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260338 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.260350 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.282633 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:32:50.892809826 +0000 UTC Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.293981 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.294029 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.294084 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:05 crc kubenswrapper[4902]: E0121 14:35:05.294111 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:05 crc kubenswrapper[4902]: E0121 14:35:05.294188 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:05 crc kubenswrapper[4902]: E0121 14:35:05.294245 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362767 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362783 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.362794 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466150 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466159 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466175 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.466185 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.568928 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.568990 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.568999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.569015 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.569024 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671869 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.671883 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.728967 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/0.log" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.729032 4902 generic.go:334] "Generic (PLEG): container finished" podID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" containerID="801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e" exitCode=1 Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.729114 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerDied","Data":"801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.729612 4902 scope.go:117] "RemoveContainer" containerID="801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.743567 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.757273 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.772477 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.774630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.774678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc 
kubenswrapper[4902]: I0121 14:35:05.774687 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.774705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.774716 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.785659 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.801474 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.814893 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.830540 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.844589 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.862759 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.877522 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.881923 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.902058 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48
f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.919912 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.933262 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.946086 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.958055 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.969644 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.979894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.980182 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.980312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:05 crc 
kubenswrapper[4902]: I0121 14:35:05.980423 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.980515 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:05Z","lastTransitionTime":"2026-01-21T14:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.982065 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:05 crc kubenswrapper[4902]: I0121 14:35:05.992265 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:05Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083538 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083575 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083584 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.083608 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187126 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187188 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.187223 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.282810 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:24:42.347774877 +0000 UTC Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.290791 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.291138 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.291266 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.291355 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.291449 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.294238 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:06 crc kubenswrapper[4902]: E0121 14:35:06.294477 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.394808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.395111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.395210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.395378 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.395468 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.497739 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.497995 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.498106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.498134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.498143 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600490 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600541 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600552 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600568 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.600580 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703259 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703287 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.703299 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.734183 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/0.log" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.734231 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.754351 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.774949 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.789705 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.801831 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805677 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805704 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805712 4902 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805725 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.805733 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.815347 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.829150 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.842107 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.859371 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.872694 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.884415 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.903525 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.907916 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.907963 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.907978 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.907995 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.908011 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:06Z","lastTransitionTime":"2026-01-21T14:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.918874 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.931417 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.944509 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.956930 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.974753 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.984741 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:06 crc kubenswrapper[4902]: I0121 14:35:06.995147 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:06Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010910 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.010968 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113262 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113311 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113320 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113337 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.113349 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216548 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216558 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216582 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.216600 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.283555 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:25:26.370509254 +0000 UTC Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.294087 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:07 crc kubenswrapper[4902]: E0121 14:35:07.294203 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.294367 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:07 crc kubenswrapper[4902]: E0121 14:35:07.294416 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.294567 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:07 crc kubenswrapper[4902]: E0121 14:35:07.294748 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319171 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319198 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319223 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.319232 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422136 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422180 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422191 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.422217 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525431 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525481 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.525511 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628510 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628559 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628576 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.628587 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731409 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.731434 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834087 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.834122 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.936914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.936959 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.936972 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.936989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:07 crc kubenswrapper[4902]: I0121 14:35:07.937000 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:07Z","lastTransitionTime":"2026-01-21T14:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039245 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039305 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039317 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039333 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.039344 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141833 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.141859 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243863 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243907 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.243968 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.284578 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:07:32.757132786 +0000 UTC Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.293896 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:08 crc kubenswrapper[4902]: E0121 14:35:08.294014 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.317816 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.330063 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.343150 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346062 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346122 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346142 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.346158 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.358887 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.374738 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.398348 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1
fd0deb8c72c409b5da01aa57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.412154 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.423428 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.435547 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.446088 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.448918 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.449030 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.449110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.449204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.449296 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.458002 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.470103 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.480446 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.490556 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.501783 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.511382 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.525267 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.534134 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552444 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552482 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552507 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.552518 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654281 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654330 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654344 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.654354 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756869 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756938 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.756949 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859373 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859438 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.859447 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961664 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961703 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:08 crc kubenswrapper[4902]: I0121 14:35:08.961724 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:08Z","lastTransitionTime":"2026-01-21T14:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064902 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064920 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064945 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.064962 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177032 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177150 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.177163 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279277 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279300 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279309 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279324 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.279332 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.285089 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:45:38.032207687 +0000 UTC Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.294236 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.294258 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:09 crc kubenswrapper[4902]: E0121 14:35:09.294354 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.294376 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:09 crc kubenswrapper[4902]: E0121 14:35:09.294482 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:09 crc kubenswrapper[4902]: E0121 14:35:09.294566 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386275 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.386347 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.489536 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.489608 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.489627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.489653 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.489672 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.592677 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.592706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.592714 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.592728 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.592737 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.694944 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.694985 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.694997 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.695013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.695025 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.797794 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.797836 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.797845 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.797859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.797869 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.900554 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.900600 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.900610 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.900627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:09 crc kubenswrapper[4902]: I0121 14:35:09.900638 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:09Z","lastTransitionTime":"2026-01-21T14:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.003392 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.003463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.003480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.003506 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.003525 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.106466 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.106497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.106509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.106529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.106541 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.208570 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.208611 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.208625 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.208642 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.208655 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.285498 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:14:58.234846109 +0000 UTC Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.294940 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:10 crc kubenswrapper[4902]: E0121 14:35:10.295122 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310944 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.310974 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414012 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414076 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414113 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.414125 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517184 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.517197 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619191 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619202 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.619231 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.721877 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.721961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.721991 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.722016 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.722085 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825281 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.825311 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929251 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929319 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:10 crc kubenswrapper[4902]: I0121 14:35:10.929395 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:10Z","lastTransitionTime":"2026-01-21T14:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033569 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033633 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033641 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.033669 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136585 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136607 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.136616 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240344 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240401 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240417 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.240464 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.286160 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:15:55.272225583 +0000 UTC Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.294696 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.294737 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.294799 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:11 crc kubenswrapper[4902]: E0121 14:35:11.294912 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:11 crc kubenswrapper[4902]: E0121 14:35:11.295064 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:11 crc kubenswrapper[4902]: E0121 14:35:11.295158 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.342796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.342837 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.342845 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.342859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.342869 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.445952 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.445996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.446007 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.446024 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.446036 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.548220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.548282 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.548295 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.548318 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.548330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.650689 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.650759 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.650769 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.650785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.650794 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.754107 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.754174 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.754196 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.754225 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.754248 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.857733 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.857786 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.857809 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.857840 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.857864 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.960894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.960940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.960951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.960966 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:11 crc kubenswrapper[4902]: I0121 14:35:11.960975 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:11Z","lastTransitionTime":"2026-01-21T14:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.063716 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.063752 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.063760 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.063773 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.063784 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.166452 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.166492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.166503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.166520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.166532 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.269280 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.269323 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.269335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.269352 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.269364 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.287231 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:08:52.571124904 +0000 UTC Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.294622 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.294771 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298874 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298935 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298951 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.298964 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.311988 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316618 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316648 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.316668 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.335968 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343206 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343214 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.343241 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.360792 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365572 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.365631 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.384128 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389400 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389464 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389476 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389516 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.389529 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.404193 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:12Z is after 
2025-08-24T17:21:41Z" Jan 21 14:35:12 crc kubenswrapper[4902]: E0121 14:35:12.404317 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406735 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406772 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406787 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.406799 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509549 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509604 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509618 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509639 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.509653 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.612932 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.612985 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.612996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.613011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.613021 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715515 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715535 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.715548 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818809 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818832 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.818851 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.922883 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.922953 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.922974 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.922996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:12 crc kubenswrapper[4902]: I0121 14:35:12.923016 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:12Z","lastTransitionTime":"2026-01-21T14:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027406 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.027425 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129739 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129814 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.129891 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.232986 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.233084 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.233100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.233121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.233136 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.287834 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:18:36.258076676 +0000 UTC Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.294170 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.294234 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.294225 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:13 crc kubenswrapper[4902]: E0121 14:35:13.294845 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:13 crc kubenswrapper[4902]: E0121 14:35:13.295089 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:13 crc kubenswrapper[4902]: E0121 14:35:13.295262 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.295350 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336475 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336544 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336565 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.336582 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441263 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441385 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441427 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.441440 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545622 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545701 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.545735 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648533 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648542 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.648564 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751310 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751354 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751381 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.751393 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.759377 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/2.log" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.762703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.763261 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.784132 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.800397 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.812269 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.827792 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.839987 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.853946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.853999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.854010 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.854027 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.854075 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.857224 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 
14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCont
ainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.869529 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.882724 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.899436 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.912474 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.932140 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.947878 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.956996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.957096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.957112 4902 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.957141 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.957160 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:13Z","lastTransitionTime":"2026-01-21T14:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.961489 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:13 crc kubenswrapper[4902]: I0121 14:35:13.984982 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.000887 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:13Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.020413 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.032932 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.043359 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060066 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060081 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060098 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.060111 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163430 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163482 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163497 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.163507 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266563 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266606 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.266629 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.288200 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:36:57.716793265 +0000 UTC Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.294803 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:14 crc kubenswrapper[4902]: E0121 14:35:14.294987 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369443 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369486 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369513 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.369544 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471742 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471790 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471803 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.471832 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574451 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574528 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.574556 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677113 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677159 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677171 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.677181 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.767864 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.768495 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/2.log" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.771859 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" exitCode=1 Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.771929 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.771989 4902 scope.go:117] "RemoveContainer" containerID="c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.772540 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:35:14 crc kubenswrapper[4902]: E0121 14:35:14.772774 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780749 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780793 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.780818 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.790131 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.803422 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.815147 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.827460 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.839601 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.851318 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.867162 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.880746 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882614 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882687 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882710 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.882740 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.893694 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.905745 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.918647 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.931680 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.947838 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.976546 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93
c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86032cfd6d0cf5e1b6c0a9c2e41e61449060fb1fd0deb8c72c409b5da01aa57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:34:43Z\\\",\\\"message\\\":\\\" annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:34:43Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:34:43.143831 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw\\\\nI0121 14:34:43.143831 6521 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-kq588 before timer (time: 2026-01-21 14:34:44.616256342 +0000 UTC m=+2.021937169): skip\\\\nI0121 14:34:43.143829 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-m2bnb\\\\nI0121 14:34:43.143839 6521 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143851 6521 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw in node crc\\\\nI0121 14:34:43.143853 6521 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lg6wz\\\\nI0121 14:34:43.143860 6521 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-m2bnb in node crc\\\\nI0121 14:34:43.143863 6521 ovn.go:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 
2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985263 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985310 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.985362 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:14Z","lastTransitionTime":"2026-01-21T14:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:14 crc kubenswrapper[4902]: I0121 14:35:14.995976 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.007488 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.017138 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.028284 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089344 4902 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089390 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089400 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089419 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.089429 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.192982 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.193082 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.193103 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.193135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.193156 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.289319 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:04:05.165181666 +0000 UTC Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.294605 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.294637 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.294649 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:15 crc kubenswrapper[4902]: E0121 14:35:15.294715 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:15 crc kubenswrapper[4902]: E0121 14:35:15.295122 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:15 crc kubenswrapper[4902]: E0121 14:35:15.295109 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296188 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296248 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296272 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296298 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.296321 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399370 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399465 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.399526 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502805 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502892 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502948 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.502968 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606195 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606253 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.606277 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714367 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714403 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714412 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714427 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.714439 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.777640 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.782723 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:35:15 crc kubenswrapper[4902]: E0121 14:35:15.782981 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.797102 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.812229 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816789 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816908 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.816923 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.825435 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.838977 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.850118 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.860234 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.879800 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.893531 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.912420 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da9
74e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922730 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922788 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922802 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922825 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.922844 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:15Z","lastTransitionTime":"2026-01-21T14:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.939357 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.958942 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.977167 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:15 crc kubenswrapper[4902]: I0121 14:35:15.991468 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.021684 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026064 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026109 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026141 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.026155 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.042604 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.057439 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.072446 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.088760 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:16Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129062 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129111 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.129155 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.231250 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.231336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.231356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.231729 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.232305 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.290214 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:04:38.994692461 +0000 UTC Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.293921 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:16 crc kubenswrapper[4902]: E0121 14:35:16.294184 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.309670 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.335554 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.335865 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.335965 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.336145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.336264 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439136 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439189 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439201 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.439234 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.541989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.542067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.542077 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.542098 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.542111 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645428 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645495 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.645547 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749154 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749167 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749184 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.749199 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853244 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853311 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.853364 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956648 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956748 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956767 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:16 crc kubenswrapper[4902]: I0121 14:35:16.956778 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:16Z","lastTransitionTime":"2026-01-21T14:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060066 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060167 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.060181 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163463 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163556 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.163973 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266713 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266794 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.266804 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.291520 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:44:24.372107739 +0000 UTC Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.293950 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:17 crc kubenswrapper[4902]: E0121 14:35:17.294101 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.294356 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.294461 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:17 crc kubenswrapper[4902]: E0121 14:35:17.294700 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:17 crc kubenswrapper[4902]: E0121 14:35:17.294909 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369446 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369458 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.369492 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472235 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472294 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472305 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.472332 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575256 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575315 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575325 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575345 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.575361 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678408 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678465 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.678512 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781321 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781613 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781678 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781748 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.781816 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884142 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884218 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.884232 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987288 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987594 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987780 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:17 crc kubenswrapper[4902]: I0121 14:35:17.987841 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:17Z","lastTransitionTime":"2026-01-21T14:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090582 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090679 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090705 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.090754 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.192994 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.193036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.193081 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.193098 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.193110 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.291735 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:08:51.30420266 +0000 UTC Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.294296 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:18 crc kubenswrapper[4902]: E0121 14:35:18.294459 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296665 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296751 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.296815 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.311541 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14
:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.336402 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.351030 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.367824 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.394590 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398784 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398852 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398866 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.398904 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.412612 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.426586 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.437670 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.459278 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93
c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.480588 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48
f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.492996 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502332 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502395 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.502531 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.503407 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.522223 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.535781 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.546114 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c6d394d-639a-4b18-9e61-3f28950ff275\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5941fa9b0928cf6a092eda06a1456dc7cc2e20ca9cded4fc963bf722557ddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.561589 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"mac
hine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.575120 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.590272 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.603009 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:18Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604844 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604884 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604896 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604917 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.604930 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.708828 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.708919 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.708943 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.708975 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.709002 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812188 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812267 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.812287 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915627 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915677 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915718 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:18 crc kubenswrapper[4902]: I0121 14:35:18.915734 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:18Z","lastTransitionTime":"2026-01-21T14:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018422 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018467 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018500 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.018515 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122029 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122118 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122138 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.122179 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225555 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225664 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225712 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.225733 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.293276 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:50:31.856510404 +0000 UTC Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.294555 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.294603 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.294747 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:19 crc kubenswrapper[4902]: E0121 14:35:19.294735 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:19 crc kubenswrapper[4902]: E0121 14:35:19.294876 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:19 crc kubenswrapper[4902]: E0121 14:35:19.294964 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338232 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.338253 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.440862 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.440952 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.440976 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.441007 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.441029 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.548798 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.548853 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.549108 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.549130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.549141 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651817 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651830 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.651840 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755723 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755828 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755842 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755870 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.755886 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859380 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859415 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.859449 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963114 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963134 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:19 crc kubenswrapper[4902]: I0121 14:35:19.963179 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:19Z","lastTransitionTime":"2026-01-21T14:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066342 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066396 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066405 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.066437 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169338 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169418 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169445 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.169502 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.272892 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.272950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.272964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.272986 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.273005 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.294572 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:23:54.142742961 +0000 UTC Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.294820 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:20 crc kubenswrapper[4902]: E0121 14:35:20.295016 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375629 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375638 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375655 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.375666 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478194 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478254 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478266 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478284 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.478300 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582174 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582245 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582260 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582283 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.582300 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684937 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.684979 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788763 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788787 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.788847 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891515 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891575 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891594 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.891633 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994747 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994774 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:20 crc kubenswrapper[4902]: I0121 14:35:20.994783 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:20Z","lastTransitionTime":"2026-01-21T14:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097112 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097156 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097168 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097184 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.097195 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.182646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.182972 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.182932255 +0000 UTC m=+147.259765344 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199357 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199368 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199385 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.199398 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.283857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.283924 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.283969 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.284011 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284110 4902 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284159 4902 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284189 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284120 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284237 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284253 4902 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284196 4902 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.284173152 +0000 UTC m=+147.361006191 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284213 4902 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284389 4902 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284313 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.284296856 +0000 UTC m=+147.361129895 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284500 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.284479811 +0000 UTC m=+147.361312840 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.284519 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.284508392 +0000 UTC m=+147.361341421 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.294755 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.294771 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.294904 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.294780 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.294766 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:24:05.034192666 +0000 UTC Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.294999 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:21 crc kubenswrapper[4902]: E0121 14:35:21.295078 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.301989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.302018 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.302028 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.302063 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.302076 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404182 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.404214 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507258 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507323 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507341 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.507383 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610781 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610835 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610869 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.610882 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714304 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714366 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714387 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714416 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.714438 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.816865 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.816928 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.816946 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.816971 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.817093 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920425 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:21 crc kubenswrapper[4902]: I0121 14:35:21.920558 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:21Z","lastTransitionTime":"2026-01-21T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.023999 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.024100 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.024123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.024152 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.024172 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126925 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126960 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126984 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.126995 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230245 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230296 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230330 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.230343 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.294173 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:22 crc kubenswrapper[4902]: E0121 14:35:22.294369 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.295224 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:23:27.358938444 +0000 UTC Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333662 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333680 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.333715 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436076 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436120 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.436191 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539707 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539793 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.539807 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643240 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643301 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643313 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.643351 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745832 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745854 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.745863 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765588 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765629 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765642 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.765667 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: E0121 14:35:22.778464 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783099 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783226 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.783241 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: E0121 14:35:22.794887 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798238 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798264 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:22 crc kubenswrapper[4902]: I0121 14:35:22.798288 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:22Z","lastTransitionTime":"2026-01-21T14:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:22 crc kubenswrapper[4902]: E0121 14:35:22.812126 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207643 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207704 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207722 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.207747 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.229096 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235212 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235270 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235287 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235314 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.235330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.264888 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.265089 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266769 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266843 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266875 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.266886 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.294506 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.294555 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.294637 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.294759 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.294845 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:23 crc kubenswrapper[4902]: E0121 14:35:23.294974 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.295477 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:28:19.730849606 +0000 UTC Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370352 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370390 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.370414 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473414 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473473 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.473521 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576261 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576273 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576291 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.576303 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679521 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679557 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679567 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679586 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.679598 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781796 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781833 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781841 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781854 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.781865 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884479 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884545 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884573 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.884582 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987077 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987129 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987147 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:23 crc kubenswrapper[4902]: I0121 14:35:23.987159 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:23Z","lastTransitionTime":"2026-01-21T14:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090254 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090276 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.090354 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193743 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193820 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193842 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.193859 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.294620 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:24 crc kubenswrapper[4902]: E0121 14:35:24.294801 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.295635 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:33:12.905032545 +0000 UTC Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296217 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296248 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296261 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296281 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.296293 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398851 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398915 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398934 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398957 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.398974 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502226 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502417 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.502473 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616279 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616291 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616309 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.616324 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719452 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719508 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719534 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.719576 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.821973 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.822312 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.822376 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.822453 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.822546 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926077 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926175 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926205 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:24 crc kubenswrapper[4902]: I0121 14:35:24.926227 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:24Z","lastTransitionTime":"2026-01-21T14:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029239 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029260 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029285 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.029303 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132438 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132490 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.132533 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235755 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235811 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235822 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.235851 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.294068 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.294134 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.294179 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:25 crc kubenswrapper[4902]: E0121 14:35:25.294225 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:25 crc kubenswrapper[4902]: E0121 14:35:25.294343 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:25 crc kubenswrapper[4902]: E0121 14:35:25.294444 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.296093 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:41:29.504667789 +0000 UTC Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338803 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338844 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.338863 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.441980 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.442034 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.442058 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.442072 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.442080 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545772 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545818 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545832 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545970 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.545984 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649011 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649063 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649074 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649089 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.649100 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.751393 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.751808 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.751930 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.752119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.752222 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855646 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855716 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.855749 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958272 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958329 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958345 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958369 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:25 crc kubenswrapper[4902]: I0121 14:35:25.958391 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:25Z","lastTransitionTime":"2026-01-21T14:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062292 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062310 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062332 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.062349 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164567 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164638 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164687 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.164710 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268235 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.268257 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.294588 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:26 crc kubenswrapper[4902]: E0121 14:35:26.294820 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.296728 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:51:12.499816645 +0000 UTC Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.371884 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.371963 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.371991 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.372022 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.372092 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475649 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475721 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475745 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475779 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.475802 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.578810 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.578876 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.578900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.579016 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.579073 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682419 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682499 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682511 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.682561 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785035 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785106 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785118 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.785151 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888733 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888833 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888889 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.888911 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992254 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992267 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992288 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:26 crc kubenswrapper[4902]: I0121 14:35:26.992304 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:26Z","lastTransitionTime":"2026-01-21T14:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095932 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095944 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.095974 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198578 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198628 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198644 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198668 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.198681 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.293994 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.294070 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:27 crc kubenswrapper[4902]: E0121 14:35:27.294178 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:27 crc kubenswrapper[4902]: E0121 14:35:27.294283 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.294015 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:27 crc kubenswrapper[4902]: E0121 14:35:27.294396 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.297067 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:03:42.764405929 +0000 UTC Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301390 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301434 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.301497 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403870 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403909 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.403951 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.506626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.506941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.507098 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.507240 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.507330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610105 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610171 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.610205 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714272 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714314 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714327 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714350 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.714364 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.817736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.818114 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.818208 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.818303 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.818377 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.920658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.921110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.921239 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.921338 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:27 crc kubenswrapper[4902]: I0121 14:35:27.921444 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:27Z","lastTransitionTime":"2026-01-21T14:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.024284 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.024706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.024894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.025006 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.025125 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127733 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.127933 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230596 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230664 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230682 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230702 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.230715 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.294570 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:28 crc kubenswrapper[4902]: E0121 14:35:28.294796 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.297586 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:43:05.082207595 +0000 UTC Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.309998 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.327181 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333411 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333495 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333519 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.333535 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.342830 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.356945 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.371701 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.386983 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.400789 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.414842 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.429092 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.436872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.436911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc 
kubenswrapper[4902]: I0121 14:35:28.436922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.436940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.436956 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.443240 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.457411 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.470466 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.488188 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93
c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.506896 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48
f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.518649 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.529961 4902 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540059 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540069 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540086 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540100 4902 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.540459 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c6d394d-639a-4b18-9e61-3f28950ff275\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5941fa9b0928cf6a092eda06a1456dc7cc2e20ca9cded4fc963bf722557ddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.551277 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.564383 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643300 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643347 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643376 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.643417 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745568 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745927 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745939 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745961 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.745973 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848314 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848371 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848387 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.848398 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951131 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951146 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:28 crc kubenswrapper[4902]: I0121 14:35:28.951156 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:28Z","lastTransitionTime":"2026-01-21T14:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.053670 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.054097 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.054203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.054307 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.054411 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157911 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157945 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.157956 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.260921 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.261015 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.261093 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.261130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.261157 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.294131 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:29 crc kubenswrapper[4902]: E0121 14:35:29.294325 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.294672 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:29 crc kubenswrapper[4902]: E0121 14:35:29.294804 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.295134 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:29 crc kubenswrapper[4902]: E0121 14:35:29.295262 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.298594 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:54:12.774958816 +0000 UTC Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364247 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364328 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364395 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.364422 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467797 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467863 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.467889 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571129 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571192 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571213 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571240 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.571264 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.674459 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.674814 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.674954 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.675117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.675240 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778027 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778115 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778151 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778187 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.778210 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881696 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881743 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881754 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.881780 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.985012 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.985529 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.985691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.985849 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:29 crc kubenswrapper[4902]: I0121 14:35:29.986032 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:29Z","lastTransitionTime":"2026-01-21T14:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089547 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089605 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089623 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089646 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.089664 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192852 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.192944 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.294564 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:30 crc kubenswrapper[4902]: E0121 14:35:30.294854 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.296693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.297001 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.297520 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.297837 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.298320 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.298798 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:22:56.534754375 +0000 UTC Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.401877 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.402335 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.402601 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.402857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.403019 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.506492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.506904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.507107 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.507489 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.507852 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.612182 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.612558 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.612756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.612974 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.613838 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.716398 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.716441 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.716451 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.716467 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.716478 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.818832 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.818865 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.818873 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.818888 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.818901 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.921363 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.921694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.921792 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.921894 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:30 crc kubenswrapper[4902]: I0121 14:35:30.921976 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:30Z","lastTransitionTime":"2026-01-21T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.024767 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.025162 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.025263 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.025974 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.025993 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.128020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.128096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.128107 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.128121 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.128131 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.230718 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.231204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.231360 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.231514 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.231647 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.294580 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.294580 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.294629 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:31 crc kubenswrapper[4902]: E0121 14:35:31.295257 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:31 crc kubenswrapper[4902]: E0121 14:35:31.295148 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:31 crc kubenswrapper[4902]: E0121 14:35:31.295088 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.295893 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:35:31 crc kubenswrapper[4902]: E0121 14:35:31.296136 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.300053 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:26:30.303841964 +0000 UTC Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.334715 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.335073 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.335210 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.335364 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.335504 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.438904 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.439023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.439038 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.439079 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.439097 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.542607 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.542693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.542735 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.542772 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.542804 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.646884 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.646967 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.646993 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.647027 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.647116 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.751305 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.751397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.751447 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.751486 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.751511 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.854895 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.854969 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.854988 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.855013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.855032 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.957576 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.957623 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.957636 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.957654 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:31 crc kubenswrapper[4902]: I0121 14:35:31.957666 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:31Z","lastTransitionTime":"2026-01-21T14:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.061524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.061583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.061593 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.061612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.061622 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164725 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164776 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164788 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164809 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.164821 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268357 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.268406 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.294349 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:32 crc kubenswrapper[4902]: E0121 14:35:32.294485 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.300282 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:22:25.85490037 +0000 UTC Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371609 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371663 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371676 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.371708 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474065 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474113 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474124 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.474156 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.576745 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.576805 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.576814 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.576835 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.576846 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.679348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.679420 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.679442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.679465 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.679485 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.782073 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.782110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.782120 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.782137 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.782149 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.884987 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.885220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.885271 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.885306 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.885330 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.988372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.988443 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.988461 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.988486 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:32 crc kubenswrapper[4902]: I0121 14:35:32.988504 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:32Z","lastTransitionTime":"2026-01-21T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.091488 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.091553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.091571 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.091592 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.091605 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194421 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194471 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194493 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.194504 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.294204 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.294303 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.294338 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.294459 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.294504 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.294553 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297186 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297229 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297243 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.297253 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.300563 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:33:18.727416624 +0000 UTC Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400656 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400695 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400706 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.400737 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.503942 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.504013 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.504030 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.504076 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.504099 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.576448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.576512 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.576524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.576537 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.576546 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.591223 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595154 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595194 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595207 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595226 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.595238 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.615443 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620620 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620712 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620740 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.620761 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.638581 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.642968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.643002 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.643018 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.643058 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.643078 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.657862 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662469 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662505 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.662519 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.678255 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9c9a3794-1c52-4324-901d-b93cdd3e411b\\\",\\\"systemUUID\\\":\\\"d49da3a0-cfe2-42f1-9a2f-a7d096ede9c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:33 crc kubenswrapper[4902]: E0121 14:35:33.678378 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680118 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680154 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680163 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680176 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.680185 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782840 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782903 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.782969 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886378 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886447 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886471 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886504 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.886528 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989638 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989711 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989903 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:33 crc kubenswrapper[4902]: I0121 14:35:33.989935 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:33Z","lastTransitionTime":"2026-01-21T14:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093444 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093454 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093472 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.093482 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195603 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195654 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195668 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.195704 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.294384 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:34 crc kubenswrapper[4902]: E0121 14:35:34.294601 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298712 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298760 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298774 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298795 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.298809 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.300679 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:05:41.359993249 +0000 UTC Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401806 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401823 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401848 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.401866 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505620 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505659 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.505709 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607736 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607772 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607783 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607799 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.607808 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711164 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711276 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.711301 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814681 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814765 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814782 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814803 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.814822 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917806 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917875 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917899 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917923 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:34 crc kubenswrapper[4902]: I0121 14:35:34.917941 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:34Z","lastTransitionTime":"2026-01-21T14:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.020902 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.020982 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.020996 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.021020 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.021036 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124128 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124203 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124215 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124236 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.124251 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227703 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227746 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227775 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227792 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.227803 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.294912 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.295005 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.295082 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:35 crc kubenswrapper[4902]: E0121 14:35:35.295206 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:35 crc kubenswrapper[4902]: E0121 14:35:35.295349 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:35 crc kubenswrapper[4902]: E0121 14:35:35.295489 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.300908 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:02:28.870483496 +0000 UTC Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330539 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330572 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.330586 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.433877 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.433943 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.433962 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.433984 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.434000 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537101 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537122 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.537158 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639859 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639919 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639941 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.639952 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.743922 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.744410 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.744691 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.744886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.745103 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853305 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853346 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853368 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.853377 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.956580 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.956926 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.957177 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.957392 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:35 crc kubenswrapper[4902]: I0121 14:35:35.957636 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:35Z","lastTransitionTime":"2026-01-21T14:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061196 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061619 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061699 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.061854 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164720 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164774 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164787 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164804 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.164816 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.266958 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.267264 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.267496 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.267675 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.267788 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.294906 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:36 crc kubenswrapper[4902]: E0121 14:35:36.295246 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.301842 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:33:36.756220403 +0000 UTC Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371658 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371724 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371743 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.371777 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475148 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475194 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475220 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.475232 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.480107 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:36 crc kubenswrapper[4902]: E0121 14:35:36.480371 4902 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:35:36 crc kubenswrapper[4902]: E0121 14:35:36.480506 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs podName:05d94e6a-249a-484c-8895-085e81f1dfaa nodeName:}" failed. No retries permitted until 2026-01-21 14:36:40.480479523 +0000 UTC m=+162.557312552 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs") pod "network-metrics-daemon-kq588" (UID: "05d94e6a-249a-484c-8895-085e81f1dfaa") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577728 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577762 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577770 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.577794 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680402 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680460 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680477 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.680491 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784119 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784135 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.784175 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886560 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886617 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886639 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886664 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.886681 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989458 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989494 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989506 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989524 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:36 crc kubenswrapper[4902]: I0121 14:35:36.989534 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:36Z","lastTransitionTime":"2026-01-21T14:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092443 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092454 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.092481 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195117 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195160 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195169 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195183 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.195193 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.294220 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.294301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:37 crc kubenswrapper[4902]: E0121 14:35:37.294522 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.294704 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:37 crc kubenswrapper[4902]: E0121 14:35:37.294981 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:37 crc kubenswrapper[4902]: E0121 14:35:37.295195 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.298979 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.299026 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.299066 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.299085 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.299099 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.302774 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:12:00.342769747 +0000 UTC Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402407 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402422 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.402457 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.504976 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.505029 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.505043 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.505090 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.505103 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607824 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607881 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607898 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.607913 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710721 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710791 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710812 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.710822 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814222 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814350 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814363 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814382 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.814395 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917502 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917553 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917564 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917582 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:37 crc kubenswrapper[4902]: I0121 14:35:37.917594 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:37Z","lastTransitionTime":"2026-01-21T14:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019798 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019849 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019867 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.019880 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.122997 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.123064 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.123096 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.123110 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.123120 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226059 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226102 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226115 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226131 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.226141 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.294422 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:38 crc kubenswrapper[4902]: E0121 14:35:38.294845 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.303435 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:23:09.945955645 +0000 UTC Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329336 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329641 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329744 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329845 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.329914 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.331461 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ea6b476-4e82-4f87-88d5-dbd34976357d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a36ef4503b3f3a884aa89f1a86c007e069d46ae4c58201e8455544ae2cd71c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffd8608ac99f9cbe7871bb48f8114b37dfc5ff4b294bba8e0fd6fc8cbd4bc87c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36382fe6801fb7f0c0867d282a5243a59e443592574f8850863764b1c21be360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff4521029a4ea52074d935b3c13c4693a97da974e325b72458be633605221c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ae7bd2dade584a736b0699e2f8bf36f4aa8a0e90e57d3550310e929001472d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f3707b9ca489c65dbc11cd0d78e321786ebcd7086aecd0462ba39464864cab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d8ecd6afd1cb82c8ac3a76d371e1cfd73ac6958b51bedbbaa34d2ee415f8e53\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2893320fc19ba001d5b124fca6304c5576ebf19bd9b3b1d49293dbd6c1b28f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.347447 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T14:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 14:34:11.030870 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 14:34:11.032598 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3609112002/tls.crt::/tmp/serving-cert-3609112002/tls.key\\\\\\\"\\\\nI0121 14:34:16.560133 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 14:34:16.563626 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 14:34:16.563645 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 14:34:16.563667 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 14:34:16.563674 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 14:34:16.570635 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 14:34:16.570659 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0121 14:34:16.570658 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 14:34:16.570667 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 14:34:16.570689 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 14:34:16.570694 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 14:34:16.570697 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 14:34:16.570701 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 14:34:16.573765 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.364668 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2361a795-ffa1-4a7c-89c0-98e9431b9e61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb65617b3bddeb2d6cad5f32242672aa241cdc60c9b3dc415903da57901a441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://999ff55e5fec7dfb6dcfd1745ff9f43ce7788bdc5d4931c8b95eb4e66b81bc35\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c56e537d82dbf1c1aaeecc386c7991ed899b464aa5c92247432fd70f25d6197\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.383496 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653cb9d2fcac425c4b5cb70459b45cf6ce5a8cc7205549e2e6197be76b52d6f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.398779 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.417353 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec3a89a-830c-4274-8c1e-bd3c98120708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:14Z\\\",\\\"message\\\":\\\"45-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:35:14.371450 6933 services_controller.go:445] Built service openshift-authentication/oauth-openshift LB template configs for network=default: []services.lbConfig(nil)\\\\nF0121 14:35:14.372204 6933 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:14Z is after 2025-08-24T17:21:41Z]\\\\nI0121 14:35:14.372229 6933 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0121 14:35:14.372206 6933 services_controller.go:451] Built service openshift-authentication/oauth-open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:35:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcxq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8l7jc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433233 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433388 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433426 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433453 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.433938 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c6d394d-639a-4b18-9e61-3f28950ff275\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5941fa9b0928cf6a092eda06a1456dc7cc2e20ca9cded4fc963bf722557ddb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bbd2d677787d4c4acc335092c83a711598f506dd9b3d9e967cb27921650973f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.448010 4902 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6c85cc7-ee09-4640-ab22-ce79d086ad7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e150be3d95e61dbf37ffe59dfd8a3f437aad672edb5499f4b8568c2ca0cd4dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4dtf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m2bnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.460155 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lg6wz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f01bb5a-c917-4341-a173-725a85c1f0d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5897bdce95e525d513329e77e834e3e6821f5e937c7c06b38531b79138d6a29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gv4jk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lg6wz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.472890 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mztd6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"037b55cf-cb9e-41ce-8b1e-3898f490a4aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:35:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:35:05Z\\\",\\\"message\\\":\\\"2026-01-21T14:34:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46\\\\n2026-01-21T14:34:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_918e7992-74d0-422c-9573-3e655c770c46 to /host/opt/cni/bin/\\\\n2026-01-21T14:34:20Z [verbose] multus-daemon started\\\\n2026-01-21T14:34:20Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:35:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h7w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mztd6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.487457 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f00b2c1e-2662-466e-b936-05f43db67fec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4baaa2dd3a54b2721168ee2a05932341b68010ea0e08651e9feeb9aef2e61928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baba2af72e87ee2fdb9aff79c11ff0146403a2cbcef5a1de54e3531ea075f1e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4sj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mpqkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 
14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.501679 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h68nf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbee8a9-6952-46b5-a958-ff8f1847fabd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f472100524d4d6a9a88249404ac5f5fd4bd17e1312bad54c816937b33e0e1c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffe5f7cca086cd53eee7964aeb74049a70c119e1e10b56eaaddc24535c9cf597\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0832d79118ab1c93103ef08ca89bcc2406f788f610c58c1505742dbfbc57bf30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02790b6feea20ee336386e05f82a19161327cc0de6c88988c0bb80886bfe1475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd771c717e19bcb47a1288d5a471f4219e791c80ce0590bb6cabb0ad7ea50e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26fbb12992f47a1bb5ac0f6b05efd60ca21ae45407cc29367425373ef204f870\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fca169bca50755251da201cd8de20113dbd524c597a10cd2e0bc0b1b167b93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:34:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rr89\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h68nf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.513482 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq588" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d94e6a-249a-484c-8895-085e81f1dfaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wh22z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq588\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.523757 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e90c662-3709-4ec5-8dcd-cd159916c9a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1b4bad85265f15d2d15a1392664127223090ef5a25e0e99ac221baf1f0bfe1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2642d8280b5ab7900096ad18d52cb26533086521d54f01ab26e192327f759c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f59bcdedecf49081493b9e8afa887ae3a75965d72a53f5bab7a6bd002c4d163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96171b8fbe79874cbd81eee676f5237079574ddfcdd5831992a81b41257986e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:33:59Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:33:58Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.535939 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536536 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536626 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536649 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.536663 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.552295 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d0c0e5ae81c901002861b711cd0b758b78242aebd3b0ec002157d0e26bd957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1ef32733cbd9092e4941532f8cee350ce6ed76d38ac774084576057a34250d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.564773 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.577398 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448443987c46d43df802e29211990b7b2c6bcebefc7a9743e96fe5e180ea6b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.590426 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-62549" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f7f4ebe-2b62-4cab-934b-f038b6a05d07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc695093f80acc856c2861b648293151f67f888367b52280ff0bd9f253287ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:34:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:34:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-62549\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:35:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.639694 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.640076 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.640331 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.640551 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.640731 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.744083 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.744486 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.744688 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.744886 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.745025 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847143 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847216 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847249 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.847262 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.950250 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.950690 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.950895 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.951092 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:38 crc kubenswrapper[4902]: I0121 14:35:38.951273 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:38Z","lastTransitionTime":"2026-01-21T14:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.053963 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.054036 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.054066 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.054085 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.054097 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157291 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157344 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157356 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157375 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.157388 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261834 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261884 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261896 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.261927 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.294515 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.294744 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.294898 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:39 crc kubenswrapper[4902]: E0121 14:35:39.294908 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:39 crc kubenswrapper[4902]: E0121 14:35:39.295453 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:39 crc kubenswrapper[4902]: E0121 14:35:39.295839 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.304115 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:42:35.605703334 +0000 UTC Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365051 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365103 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365145 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.365158 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.468324 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.468865 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.469123 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.469293 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.469472 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.572630 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.573204 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.573474 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.573712 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.573927 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677172 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677231 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677248 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677277 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.677295 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.780435 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.780842 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.781016 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.781219 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.781406 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885355 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885423 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885468 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.885490 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988608 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988670 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988687 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988708 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:39 crc kubenswrapper[4902]: I0121 14:35:39.988720 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:39Z","lastTransitionTime":"2026-01-21T14:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.091871 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.091964 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.091989 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.092023 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.092085 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195383 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195438 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195450 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195472 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.195484 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.294391 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:40 crc kubenswrapper[4902]: E0121 14:35:40.294873 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298440 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298478 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298492 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298515 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.298530 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.305240 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:16:30.099487182 +0000 UTC Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401397 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401447 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401458 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401480 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.401493 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.505910 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.505968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.505982 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.506001 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.506013 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609083 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609125 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609139 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.609172 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712199 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712275 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712295 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712322 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.712340 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.815892 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.815950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.815968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.815992 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.816012 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919290 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919362 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919399 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919431 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:40 crc kubenswrapper[4902]: I0121 14:35:40.919452 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:40Z","lastTransitionTime":"2026-01-21T14:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023567 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023639 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023657 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023681 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.023698 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126391 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126436 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126448 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126464 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.126477 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229817 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229898 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229919 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229950 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.229973 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.294546 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:41 crc kubenswrapper[4902]: E0121 14:35:41.294754 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.294831 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:41 crc kubenswrapper[4902]: E0121 14:35:41.294921 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.294994 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:41 crc kubenswrapper[4902]: E0121 14:35:41.295141 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.305473 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:16:53.121627667 +0000 UTC Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333880 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333905 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333949 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.333977 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436872 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436910 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436940 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.436957 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.539746 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.540062 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.540130 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.540193 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.540250 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642731 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642791 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642813 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.642858 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745775 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745785 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745800 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.745811 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848537 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848612 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848647 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848677 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.848697 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.952348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.952888 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.953071 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.953264 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:41 crc kubenswrapper[4902]: I0121 14:35:41.953390 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:41Z","lastTransitionTime":"2026-01-21T14:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057198 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057265 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057274 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057294 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.057305 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160133 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160189 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160202 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160221 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.160236 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.262756 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.263265 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.263503 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.263738 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.263957 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.294640 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:42 crc kubenswrapper[4902]: E0121 14:35:42.295162 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.306246 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:44:58.622007953 +0000 UTC Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367158 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367233 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367242 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367256 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.367266 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470140 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470241 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470292 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470318 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.470335 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.573446 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.573857 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.574067 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.574247 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.574409 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677734 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677768 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677778 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677792 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.677803 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780598 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780669 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780680 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780693 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.780744 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883424 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883462 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883471 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883484 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.883495 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.986509 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.986900 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.987008 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.987149 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:42 crc kubenswrapper[4902]: I0121 14:35:42.987253 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:42Z","lastTransitionTime":"2026-01-21T14:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090531 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090583 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090595 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090615 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.090627 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194764 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194810 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194821 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194839 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.194851 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.294842 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.294933 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:43 crc kubenswrapper[4902]: E0121 14:35:43.295032 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:43 crc kubenswrapper[4902]: E0121 14:35:43.295250 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.295594 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:43 crc kubenswrapper[4902]: E0121 14:35:43.295964 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297308 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297348 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297358 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297375 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.297387 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.306780 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:51:00.735637707 +0000 UTC Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400056 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400107 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400122 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400140 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.400152 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502374 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502403 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502412 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502425 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.502434 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606372 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606442 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606470 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606505 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.606526 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709791 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709896 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709914 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709947 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.709965 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813897 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813944 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813952 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813968 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.813977 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877014 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877075 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877086 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877102 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.877113 4902 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:35:43Z","lastTransitionTime":"2026-01-21T14:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.995694 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq"] Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.996251 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:43 crc kubenswrapper[4902]: I0121 14:35:43.998373 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.000696 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.002087 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.002939 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.019600 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.019582982 podStartE2EDuration="58.019582982s" podCreationTimestamp="2026-01-21 14:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.019277583 +0000 UTC m=+106.096110612" watchObservedRunningTime="2026-01-21 14:35:44.019582982 +0000 UTC m=+106.096416011" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.062861 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8146a15d-15b4-4340-bc59-aa7767cc7977-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.062934 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8146a15d-15b4-4340-bc59-aa7767cc7977-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.062959 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.062987 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.063005 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8146a15d-15b4-4340-bc59-aa7767cc7977-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.092139 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-62549" podStartSLOduration=86.092116549 podStartE2EDuration="1m26.092116549s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.091793369 +0000 UTC m=+106.168626418" watchObservedRunningTime="2026-01-21 14:35:44.092116549 +0000 UTC m=+106.168949578" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.109263 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-h68nf" podStartSLOduration=86.109238164 podStartE2EDuration="1m26.109238164s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.108163222 +0000 UTC m=+106.184996251" watchObservedRunningTime="2026-01-21 14:35:44.109238164 +0000 UTC m=+106.186071193" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.159455 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=84.159436905 podStartE2EDuration="1m24.159436905s" podCreationTimestamp="2026-01-21 14:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.159055454 +0000 UTC m=+106.235888483" watchObservedRunningTime="2026-01-21 14:35:44.159436905 +0000 UTC m=+106.236269934" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164616 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8146a15d-15b4-4340-bc59-aa7767cc7977-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164677 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164713 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164735 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8146a15d-15b4-4340-bc59-aa7767cc7977-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164764 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8146a15d-15b4-4340-bc59-aa7767cc7977-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164894 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.164975 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8146a15d-15b4-4340-bc59-aa7767cc7977-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.166541 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8146a15d-15b4-4340-bc59-aa7767cc7977-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.179295 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8146a15d-15b4-4340-bc59-aa7767cc7977-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.179483 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.179466694 podStartE2EDuration="1m27.179466694s" podCreationTimestamp="2026-01-21 14:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.178174147 +0000 UTC m=+106.255007176" watchObservedRunningTime="2026-01-21 14:35:44.179466694 +0000 UTC m=+106.256299723" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.181912 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8146a15d-15b4-4340-bc59-aa7767cc7977-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7s2jq\" (UID: \"8146a15d-15b4-4340-bc59-aa7767cc7977\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.202712 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.202691865 podStartE2EDuration="1m27.202691865s" podCreationTimestamp="2026-01-21 14:34:17 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.202023006 +0000 UTC m=+106.278856035" watchObservedRunningTime="2026-01-21 14:35:44.202691865 +0000 UTC m=+106.279524894" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.271004 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.27097906 podStartE2EDuration="28.27097906s" podCreationTimestamp="2026-01-21 14:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.269995071 +0000 UTC m=+106.346828100" watchObservedRunningTime="2026-01-21 14:35:44.27097906 +0000 UTC m=+106.347812089" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.292609 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podStartSLOduration=86.292588514 podStartE2EDuration="1m26.292588514s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.281340369 +0000 UTC m=+106.358173388" watchObservedRunningTime="2026-01-21 14:35:44.292588514 +0000 UTC m=+106.369421543" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.294128 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:44 crc kubenswrapper[4902]: E0121 14:35:44.294288 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.307010 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lg6wz" podStartSLOduration=86.306989441 podStartE2EDuration="1m26.306989441s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.292941095 +0000 UTC m=+106.369774124" watchObservedRunningTime="2026-01-21 14:35:44.306989441 +0000 UTC m=+106.383822480" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.307219 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:59:11.1884524 +0000 UTC Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.307314 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.307904 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mztd6" podStartSLOduration=86.307897157 podStartE2EDuration="1m26.307897157s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.30730246 +0000 UTC m=+106.384135499" watchObservedRunningTime="2026-01-21 14:35:44.307897157 +0000 UTC m=+106.384730186" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.314738 4902 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.315627 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.319983 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mpqkw" podStartSLOduration=86.319963066 podStartE2EDuration="1m26.319963066s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.319806541 +0000 UTC m=+106.396639580" watchObservedRunningTime="2026-01-21 14:35:44.319963066 +0000 UTC m=+106.396796095" Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.887424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" event={"ID":"8146a15d-15b4-4340-bc59-aa7767cc7977","Type":"ContainerStarted","Data":"e5cabf022c587ac20a0ec5e7df00da5b80c7f5b24d2ee6ecb683a482297a7e17"} Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.887510 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" event={"ID":"8146a15d-15b4-4340-bc59-aa7767cc7977","Type":"ContainerStarted","Data":"38944b7c82071b7dd14c93d2b9cfbf636bee0ff8d9bd181a59f1ed9fcac0a38c"} Jan 21 14:35:44 crc kubenswrapper[4902]: I0121 14:35:44.902788 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7s2jq" podStartSLOduration=86.902772806 podStartE2EDuration="1m26.902772806s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.902524928 +0000 UTC m=+106.979357977" watchObservedRunningTime="2026-01-21 14:35:44.902772806 +0000 UTC m=+106.979605835" Jan 21 14:35:45 crc kubenswrapper[4902]: I0121 14:35:45.294368 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:45 crc kubenswrapper[4902]: I0121 14:35:45.294475 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:45 crc kubenswrapper[4902]: E0121 14:35:45.294518 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:45 crc kubenswrapper[4902]: E0121 14:35:45.294610 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:45 crc kubenswrapper[4902]: I0121 14:35:45.294469 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:45 crc kubenswrapper[4902]: E0121 14:35:45.294870 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:46 crc kubenswrapper[4902]: I0121 14:35:46.294038 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:46 crc kubenswrapper[4902]: E0121 14:35:46.294295 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:46 crc kubenswrapper[4902]: I0121 14:35:46.295035 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:35:46 crc kubenswrapper[4902]: E0121 14:35:46.295221 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8l7jc_openshift-ovn-kubernetes(0ec3a89a-830c-4274-8c1e-bd3c98120708)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" Jan 21 14:35:47 crc kubenswrapper[4902]: I0121 14:35:47.294596 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:47 crc kubenswrapper[4902]: I0121 14:35:47.294640 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:47 crc kubenswrapper[4902]: I0121 14:35:47.294615 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:47 crc kubenswrapper[4902]: E0121 14:35:47.294802 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:47 crc kubenswrapper[4902]: E0121 14:35:47.295081 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:47 crc kubenswrapper[4902]: E0121 14:35:47.295171 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:48 crc kubenswrapper[4902]: I0121 14:35:48.294369 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:48 crc kubenswrapper[4902]: E0121 14:35:48.295524 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:49 crc kubenswrapper[4902]: I0121 14:35:49.294315 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:49 crc kubenswrapper[4902]: E0121 14:35:49.294452 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:49 crc kubenswrapper[4902]: I0121 14:35:49.295282 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:49 crc kubenswrapper[4902]: E0121 14:35:49.295351 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:49 crc kubenswrapper[4902]: I0121 14:35:49.295502 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:49 crc kubenswrapper[4902]: E0121 14:35:49.295576 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:50 crc kubenswrapper[4902]: I0121 14:35:50.294364 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:50 crc kubenswrapper[4902]: E0121 14:35:50.294587 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.294970 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.295013 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.295093 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:51 crc kubenswrapper[4902]: E0121 14:35:51.295204 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:51 crc kubenswrapper[4902]: E0121 14:35:51.295309 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:51 crc kubenswrapper[4902]: E0121 14:35:51.295395 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.913116 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/1.log" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914267 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/0.log" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914331 4902 generic.go:334] "Generic (PLEG): container finished" podID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" containerID="1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6" exitCode=1 Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914374 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerDied","Data":"1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6"} Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914415 4902 scope.go:117] "RemoveContainer" containerID="801dc36e4156c5e9680271f65b69a4d6765995d605773de1d02dd312488e977e" Jan 21 14:35:51 crc kubenswrapper[4902]: I0121 14:35:51.914916 4902 scope.go:117] "RemoveContainer" containerID="1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6" Jan 21 14:35:51 crc kubenswrapper[4902]: E0121 14:35:51.915233 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mztd6_openshift-multus(037b55cf-cb9e-41ce-8b1e-3898f490a4aa)\"" pod="openshift-multus/multus-mztd6" podUID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" Jan 21 14:35:52 crc kubenswrapper[4902]: I0121 14:35:52.294842 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:52 crc kubenswrapper[4902]: E0121 14:35:52.295148 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:52 crc kubenswrapper[4902]: I0121 14:35:52.921283 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/1.log" Jan 21 14:35:53 crc kubenswrapper[4902]: I0121 14:35:53.294618 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:53 crc kubenswrapper[4902]: I0121 14:35:53.294814 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:53 crc kubenswrapper[4902]: E0121 14:35:53.295119 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:53 crc kubenswrapper[4902]: I0121 14:35:53.294812 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:53 crc kubenswrapper[4902]: E0121 14:35:53.295387 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:53 crc kubenswrapper[4902]: E0121 14:35:53.295271 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:54 crc kubenswrapper[4902]: I0121 14:35:54.294600 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:54 crc kubenswrapper[4902]: E0121 14:35:54.295108 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:55 crc kubenswrapper[4902]: I0121 14:35:55.293992 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:55 crc kubenswrapper[4902]: I0121 14:35:55.294120 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:55 crc kubenswrapper[4902]: I0121 14:35:55.294081 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:55 crc kubenswrapper[4902]: E0121 14:35:55.294744 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:55 crc kubenswrapper[4902]: E0121 14:35:55.294919 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:55 crc kubenswrapper[4902]: E0121 14:35:55.295095 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:56 crc kubenswrapper[4902]: I0121 14:35:56.294826 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:56 crc kubenswrapper[4902]: E0121 14:35:56.295083 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:57 crc kubenswrapper[4902]: I0121 14:35:57.294957 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:57 crc kubenswrapper[4902]: I0121 14:35:57.294966 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:57 crc kubenswrapper[4902]: I0121 14:35:57.295102 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:57 crc kubenswrapper[4902]: E0121 14:35:57.295870 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:35:57 crc kubenswrapper[4902]: E0121 14:35:57.296185 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:57 crc kubenswrapper[4902]: E0121 14:35:57.296311 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:58 crc kubenswrapper[4902]: I0121 14:35:58.294672 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:35:58 crc kubenswrapper[4902]: E0121 14:35:58.295651 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:35:58 crc kubenswrapper[4902]: E0121 14:35:58.303994 4902 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 14:35:58 crc kubenswrapper[4902]: E0121 14:35:58.373593 4902 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:35:59 crc kubenswrapper[4902]: I0121 14:35:59.295276 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:35:59 crc kubenswrapper[4902]: E0121 14:35:59.295528 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:35:59 crc kubenswrapper[4902]: I0121 14:35:59.295855 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:35:59 crc kubenswrapper[4902]: E0121 14:35:59.295959 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:35:59 crc kubenswrapper[4902]: I0121 14:35:59.296249 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:35:59 crc kubenswrapper[4902]: E0121 14:35:59.296439 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.295118 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:00 crc kubenswrapper[4902]: E0121 14:36:00.295556 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.296901 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.962821 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.966483 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerStarted","Data":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:36:00 crc kubenswrapper[4902]: I0121 14:36:00.967222 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.001104 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podStartSLOduration=103.001075547 podStartE2EDuration="1m43.001075547s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:00.999891352 +0000 UTC m=+123.076724381" watchObservedRunningTime="2026-01-21 14:36:01.001075547 +0000 UTC m=+123.077908586" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.155100 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq588"] Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.155509 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:01 crc kubenswrapper[4902]: E0121 14:36:01.155592 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.294106 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.294198 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:01 crc kubenswrapper[4902]: E0121 14:36:01.294230 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:01 crc kubenswrapper[4902]: E0121 14:36:01.294405 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:01 crc kubenswrapper[4902]: I0121 14:36:01.294719 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:01 crc kubenswrapper[4902]: E0121 14:36:01.294844 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:03 crc kubenswrapper[4902]: I0121 14:36:03.294504 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.295830 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:03 crc kubenswrapper[4902]: I0121 14:36:03.294506 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.296112 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:03 crc kubenswrapper[4902]: I0121 14:36:03.294561 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.296336 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:03 crc kubenswrapper[4902]: I0121 14:36:03.294592 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.296567 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:03 crc kubenswrapper[4902]: E0121 14:36:03.375815 4902 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:36:05 crc kubenswrapper[4902]: I0121 14:36:05.294426 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:05 crc kubenswrapper[4902]: E0121 14:36:05.294652 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:05 crc kubenswrapper[4902]: I0121 14:36:05.294935 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:05 crc kubenswrapper[4902]: E0121 14:36:05.295028 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:05 crc kubenswrapper[4902]: I0121 14:36:05.295286 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:05 crc kubenswrapper[4902]: I0121 14:36:05.295320 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:05 crc kubenswrapper[4902]: E0121 14:36:05.295556 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:05 crc kubenswrapper[4902]: E0121 14:36:05.295781 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.294450 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:07 crc kubenswrapper[4902]: E0121 14:36:07.295327 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.295194 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:07 crc kubenswrapper[4902]: E0121 14:36:07.295673 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.294900 4902 scope.go:117] "RemoveContainer" containerID="1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.295408 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:07 crc kubenswrapper[4902]: I0121 14:36:07.295218 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:07 crc kubenswrapper[4902]: E0121 14:36:07.296132 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:07 crc kubenswrapper[4902]: E0121 14:36:07.296268 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:08 crc kubenswrapper[4902]: E0121 14:36:08.376405 4902 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:36:08 crc kubenswrapper[4902]: I0121 14:36:08.998955 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/1.log" Jan 21 14:36:08 crc kubenswrapper[4902]: I0121 14:36:08.999068 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d"} Jan 21 14:36:09 crc kubenswrapper[4902]: I0121 14:36:09.294158 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:09 crc kubenswrapper[4902]: I0121 14:36:09.294218 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:09 crc kubenswrapper[4902]: E0121 14:36:09.294305 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:09 crc kubenswrapper[4902]: I0121 14:36:09.294320 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:09 crc kubenswrapper[4902]: I0121 14:36:09.294387 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:09 crc kubenswrapper[4902]: E0121 14:36:09.294429 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:09 crc kubenswrapper[4902]: E0121 14:36:09.294491 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:09 crc kubenswrapper[4902]: E0121 14:36:09.294565 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:11 crc kubenswrapper[4902]: I0121 14:36:11.294137 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:11 crc kubenswrapper[4902]: I0121 14:36:11.294164 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:11 crc kubenswrapper[4902]: I0121 14:36:11.294225 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:11 crc kubenswrapper[4902]: I0121 14:36:11.294335 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:11 crc kubenswrapper[4902]: E0121 14:36:11.295418 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:11 crc kubenswrapper[4902]: E0121 14:36:11.295569 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:11 crc kubenswrapper[4902]: E0121 14:36:11.295616 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:11 crc kubenswrapper[4902]: E0121 14:36:11.295836 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:13 crc kubenswrapper[4902]: I0121 14:36:13.294409 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:13 crc kubenswrapper[4902]: I0121 14:36:13.294473 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:13 crc kubenswrapper[4902]: I0121 14:36:13.294456 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:13 crc kubenswrapper[4902]: E0121 14:36:13.294635 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:36:13 crc kubenswrapper[4902]: I0121 14:36:13.294405 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:13 crc kubenswrapper[4902]: E0121 14:36:13.294850 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:36:13 crc kubenswrapper[4902]: E0121 14:36:13.295021 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:36:13 crc kubenswrapper[4902]: E0121 14:36:13.295181 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq588" podUID="05d94e6a-249a-484c-8895-085e81f1dfaa" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.848550 4902 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.944000 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.944533 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.946076 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.946389 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.946615 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.947024 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.948089 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.948338 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.952190 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x9bhh"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.953798 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.967769 4902 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.967830 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.969426 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.970067 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.975107 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.975365 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.975730 4902 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.975790 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.975885 4902 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.975903 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.975975 4902 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.975993 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.976074 4902 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this 
object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.976089 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.976141 4902 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.976154 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.976217 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.976594 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.977238 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.977408 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.977420 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-57jmg"] Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.977575 4902 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.977599 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.977832 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.978169 4902 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the 
namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.978199 4902 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.978203 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.978246 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.978338 4902 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.978362 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.978453 4902 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.978486 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978579 4902 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978693 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978848 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978961 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978993 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.979219 4902 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.979265 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.979359 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.978972 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.979641 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.979713 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.979836 4902 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.979865 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.979964 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.980088 4902 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.980259 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.980426 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.983257 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.984233 4902 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.984272 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: W0121 14:36:14.984336 4902 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 21 14:36:14 crc kubenswrapper[4902]: E0121 14:36:14.984350 4902 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.984434 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.984536 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.984633 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.984954 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.985000 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.985498 4902 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.988116 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k2wkm"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.988865 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.989003 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.989705 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.989951 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.990717 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.991533 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.993059 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.993638 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.993774 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.994313 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.997889 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9hktz"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998128 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998570 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998742 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998751 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.998863 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999064 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"] Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999122 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999271 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999445 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999530 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 14:36:14 crc kubenswrapper[4902]: I0121 14:36:14.999645 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:14.999868 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:14.999957 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000172 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000285 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000395 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000519 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000557 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000737 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000838 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 
14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.000968 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.001020 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.001995 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002331 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002517 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002675 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002716 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.002832 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.003984 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004075 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004171 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004319 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004364 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004451 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004523 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004550 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004636 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004329 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004758 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004821 4902 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004919 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.004994 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.005116 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.007528 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.008278 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.023928 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.025401 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-j7zvj"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.026119 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.026516 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.027178 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.028564 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.039142 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.055553 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lrgnw"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.055814 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.056113 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.056155 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.056977 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3a537cbb-d314-4f04-94c8-625c03eb5a68-machine-approver-tls\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057032 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057077 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057103 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057132 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057151 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-serving-cert\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057238 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpczv\" (UniqueName: \"kubernetes.io/projected/3a537cbb-d314-4f04-94c8-625c03eb5a68-kube-api-access-dpczv\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057273 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057300 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057330 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057370 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057388 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057404 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057425 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057444 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bc8\" (UniqueName: \"kubernetes.io/projected/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-kube-api-access-g4bc8\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057464 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057501 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-auth-proxy-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.057759 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.058355 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.058928 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.059161 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.059527 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b5657"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.060067 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.063167 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.075296 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.063717 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.066095 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.075680 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2lccn"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.067082 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.075714 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.067390 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.068034 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.068068 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069609 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069614 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069674 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069774 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069866 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.069902 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.076283 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.076374 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.076592 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.081616 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.081884 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.082610 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q69sb"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.083672 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.084347 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.084967 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.085320 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.086096 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.086974 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.089075 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.089822 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.090911 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.091789 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.095392 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.097784 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.098915 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nshzl"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.100462 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.113799 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.114651 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.116276 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.116864 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.117479 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.117952 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.118559 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5q929"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.119974 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.120487 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.120712 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.123203 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lrz7m"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.124939 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.126349 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.126813 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.127417 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.127533 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.128178 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.129676 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.131581 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.134462 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-j7zvj"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.136948 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.138664 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k2wkm"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.141760 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9hktz"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.144010 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x9bhh"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.145446 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.146467 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-57jmg"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.146772 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.150416 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.155559 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rfwp8"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.156822 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158092 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158396 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158464 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158519 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158548 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-serving-cert\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158656 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158706 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpczv\" (UniqueName: \"kubernetes.io/projected/3a537cbb-d314-4f04-94c8-625c03eb5a68-kube-api-access-dpczv\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158739 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158764 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" 
Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158791 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.158950 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159063 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159347 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159402 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4bc8\" (UniqueName: \"kubernetes.io/projected/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-kube-api-access-g4bc8\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159437 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lrgnw"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159436 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159528 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-auth-proxy-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 
14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3a537cbb-d314-4f04-94c8-625c03eb5a68-machine-approver-tls\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159600 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.159976 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.160429 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.161414 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.161662 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.161722 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.162265 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.162574 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.163649 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.164777 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.165053 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.165124 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a537cbb-d314-4f04-94c8-625c03eb5a68-auth-proxy-config\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.165705 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.166731 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.169225 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.169935 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.170080 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.170809 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3a537cbb-d314-4f04-94c8-625c03eb5a68-machine-approver-tls\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.171210 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nshzl"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.171700 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-serving-cert\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 
14:36:15.173952 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.173989 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.179967 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.180023 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b5657"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.180642 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.181620 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.182601 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.183553 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.184640 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.186454 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.187147 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.192218 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-w2qlx"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.196973 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rfwp8"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.197167 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.199645 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q69sb"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.202596 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w2qlx"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.203745 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.205003 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5q929"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.206634 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.207126 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.207335 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.208616 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.209768 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lrz7m"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.210946 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.212200 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v4hs9"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.214753 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-w8c9w"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.214928 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.215813 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v4hs9"] Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.215930 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.247205 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.267520 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.287365 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.293867 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.293905 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.294071 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.294393 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.314004 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.327256 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.347206 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.367824 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.387474 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.406790 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.427626 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.446705 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.467467 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.488035 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.507863 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.528367 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.548727 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.568231 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.589070 4902 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.607942 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.627819 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.648036 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.666934 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.688504 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.708506 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.728022 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.748513 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.767811 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.788062 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.806902 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.832054 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.847901 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.866953 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.887744 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.907926 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.928343 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.947799 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:36:15 crc kubenswrapper[4902]: 
I0121 14:36:15.967969 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 14:36:15 crc kubenswrapper[4902]: I0121 14:36:15.987933 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.007351 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.027310 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.047452 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.067473 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.087967 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.105454 4902 request.go:700] Waited for 1.012915213s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.108000 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.127277 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.147881 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.161461 4902 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.161487 4902 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.161558 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config podName:01ee90aa-9465-4cd2-97a0-ce735d557649 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:16.661531806 +0000 UTC m=+138.738364835 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config") pod "route-controller-manager-6576b87f9c-xrcxf" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649") : failed to sync configmap cache: timed out waiting for the condition Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.161585 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert podName:01ee90aa-9465-4cd2-97a0-ce735d557649 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:16.661575737 +0000 UTC m=+138.738408776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert") pod "route-controller-manager-6576b87f9c-xrcxf" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649") : failed to sync secret cache: timed out waiting for the condition Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.167148 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.188184 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.206789 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.228321 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.247930 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.268145 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.288618 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.308854 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.347317 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.367505 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371065 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371096 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371129 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371158 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-node-pullsecrets\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371227 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371471 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7zz\" (UniqueName: \"kubernetes.io/projected/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-kube-api-access-vw7zz\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371539 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q56gh\" (UniqueName: \"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371578 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371603 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89746c70-7e6b-4f62-acb0-25848752b0bf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371634 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rrx7w\" (UniqueName: \"kubernetes.io/projected/904ff956-5fbf-4e43-aede-3fa612c9bb70-kube-api-access-rrx7w\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371653 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371694 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-etcd-serving-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371715 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-serving-cert\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371793 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371817 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-audit-dir\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371844 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371862 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-audit\") pod 
\"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371881 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-image-import-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371915 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371933 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371960 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-config\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.371985 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zp8n\" (UniqueName: \"kubernetes.io/projected/91a268d0-59c0-4e7f-8b78-260d14051e34-kube-api-access-6zp8n\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372087 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372108 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372185 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-etcd-client\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372202 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372220 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372301 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372401 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372426 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-trusted-ca\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372468 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-config\") pod \"etcd-operator-b45778765-lrgnw\" (UID: 
\"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372485 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwhzd\" (UniqueName: \"kubernetes.io/projected/64d60c19-a655-408a-99e4-becff3e27018-kube-api-access-bwhzd\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-service-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372544 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-policies\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-dir\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372620 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ac8539-334d-4811-8b3e-7a2df9e4c931-serving-cert\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372639 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372728 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372802 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372829 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f699h\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-kube-api-access-f699h\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372868 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372887 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372906 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-etcd-client\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.372961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64q7c\" (UniqueName: \"kubernetes.io/projected/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-kube-api-access-64q7c\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373011 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-config\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373108 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-serving-cert\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 
14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373130 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373193 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373620 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-encryption-config\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373645 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-config\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-service-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373717 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/89746c70-7e6b-4f62-acb0-25848752b0bf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373854 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373881 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2hd8\" (UniqueName: \"kubernetes.io/projected/8285f69a-516d-4bdd-9a14-72d966a0b208-kube-api-access-t2hd8\") pod \"downloads-7954f5f757-j7zvj\" (UID: \"8285f69a-516d-4bdd-9a14-72d966a0b208\") " pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373916 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373937 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.373993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcl92\" (UniqueName: \"kubernetes.io/projected/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-kube-api-access-xcl92\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374014 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-images\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374089 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/91a268d0-59c0-4e7f-8b78-260d14051e34-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374107 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-serving-cert\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374168 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-client\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374187 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-serving-cert\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374204 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.374250 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:16.874230995 +0000 UTC m=+138.951064244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374290 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374326 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljxl2\" (UniqueName: \"kubernetes.io/projected/5765190c-206a-481f-a72e-4f119e8881bc-kube-api-access-ljxl2\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374343 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfknp\" (UniqueName: \"kubernetes.io/projected/50ac8539-334d-4811-8b3e-7a2df9e4c931-kube-api-access-tfknp\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374360 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374396 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-encryption-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.374413 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.386926 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.407202 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.427608 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.447189 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.467458 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.474960 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.475084 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:16.975062493 +0000 UTC m=+139.051895522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475213 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4449adc-13fa-40ee-a058-f42120e5cbee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475269 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/677296cf-109d-4fc1-b3db-c8312605a5fb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475310 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475376 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-config\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475408 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zp8n\" (UniqueName: \"kubernetes.io/projected/91a268d0-59c0-4e7f-8b78-260d14051e34-kube-api-access-6zp8n\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475433 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a94b1199-eac7-4e88-ad39-44936959740c-signing-cabundle\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475455 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64f0091d-255f-4e9a-a14c-33d240892e51-proxy-tls\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475479 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475502 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4frg\" (UniqueName: \"kubernetes.io/projected/53985f44-9907-48a1-8912-6163cecceba9-kube-api-access-w4frg\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475527 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475552 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-stats-auth\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475595 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475796 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475878 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475918 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475939 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.475957 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec3e08f-1312-4857-b152-cde8e51aad05-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476034 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-trusted-ca\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476157 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-service-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-policies\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477155 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8tnw\" (UniqueName: \"kubernetes.io/projected/a605a533-8d8c-47bc-a04c-0739f97482e6-kube-api-access-d8tnw\") 
pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476979 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-etcd-service-ca\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477155 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-policies\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.476306 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-config\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477236 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-metrics-certs\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477271 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-srv-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477305 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-cert\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477343 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f699h\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-kube-api-access-f699h\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477382 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn7jw\" (UniqueName: \"kubernetes.io/projected/2c1970f7-f131-4594-b396-d33bb9776e33-kube-api-access-zn7jw\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477443 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477484 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64q7c\" (UniqueName: \"kubernetes.io/projected/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-kube-api-access-64q7c\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477695 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477735 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-etcd-client\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477770 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjp5\" (UniqueName: \"kubernetes.io/projected/ef463925-8c6c-4217-9bba-e15e1283c4c8-kube-api-access-hjjp5\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477799 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a14f9ae8-3c9b-4618-8255-a55408525925-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477833 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-config\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477861 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-serving-cert\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477887 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c4449adc-13fa-40ee-a058-f42120e5cbee-config\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477916 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477951 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-encryption-config\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.477981 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-config\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478017 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-service-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478064 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-socket-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478391 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14f9ae8-3c9b-4618-8255-a55408525925-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: 
\"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478493 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/677296cf-109d-4fc1-b3db-c8312605a5fb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.478576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-images\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.479059 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.479078 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.479281 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.479741 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-config\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.479893 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 14:36:16.979879895 +0000 UTC m=+139.056712924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480179 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmkp4\" (UniqueName: \"kubernetes.io/projected/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-kube-api-access-gmkp4\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbn55\" (UniqueName: \"kubernetes.io/projected/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-kube-api-access-qbn55\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480396 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480091 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-service-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480139 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-trusted-ca\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.480637 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-config\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: 
I0121 14:36:16.481340 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481424 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-images\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481454 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-serving-cert\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-registration-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481600 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljxl2\" (UniqueName: \"kubernetes.io/projected/5765190c-206a-481f-a72e-4f119e8881bc-kube-api-access-ljxl2\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481611 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-etcd-client\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481691 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-encryption-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: 
\"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481731 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-default-certificate\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481791 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481821 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh6gd\" (UniqueName: \"kubernetes.io/projected/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-kube-api-access-dh6gd\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481845 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9467c15f-f3fe-4594-b97d-0838d43877d1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481876 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481910 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-srv-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.481946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482003 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsdgj\" (UniqueName: \"kubernetes.io/projected/9467c15f-f3fe-4594-b97d-0838d43877d1-kube-api-access-bsdgj\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: 
\"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482088 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91a268d0-59c0-4e7f-8b78-260d14051e34-images\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482231 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78db9f9d-1963-42d2-9e52-da80ef710af8-serving-cert\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482346 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q56gh\" (UniqueName: \"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482451 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7zz\" (UniqueName: \"kubernetes.io/projected/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-kube-api-access-vw7zz\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482513 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482563 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89746c70-7e6b-4f62-acb0-25848752b0bf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482682 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrx7w\" (UniqueName: \"kubernetes.io/projected/904ff956-5fbf-4e43-aede-3fa612c9bb70-kube-api-access-rrx7w\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482761 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn2m5\" (UniqueName: \"kubernetes.io/projected/78db9f9d-1963-42d2-9e52-da80ef710af8-kube-api-access-zn2m5\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 
21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482852 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483165 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-serving-cert\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483259 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-mountpoint-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483447 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/031f1783-31bd-4008-ace8-3ede7d0a86de-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483714 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-trusted-ca\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53985f44-9907-48a1-8912-6163cecceba9-config-volume\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-audit\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.483994 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-image-import-ca\") 
pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dtcw\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-kube-api-access-2dtcw\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484155 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.482689 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484292 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4449adc-13fa-40ee-a058-f42120e5cbee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484379 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jr7b\" (UniqueName: \"kubernetes.io/projected/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-kube-api-access-4jr7b\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484464 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53985f44-9907-48a1-8912-6163cecceba9-metrics-tls\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484680 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn78g\" (UniqueName: \"kubernetes.io/projected/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-kube-api-access-cn78g\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") 
" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484855 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485200 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485272 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-webhook-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485368 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5blcl\" (UniqueName: \"kubernetes.io/projected/64f0091d-255f-4e9a-a14c-33d240892e51-kube-api-access-5blcl\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485472 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-etcd-client\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-config\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc 
kubenswrapper[4902]: I0121 14:36:16.485593 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-audit\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485614 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwhzd\" (UniqueName: \"kubernetes.io/projected/64d60c19-a655-408a-99e4-becff3e27018-kube-api-access-bwhzd\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485655 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43c52dc8-25a9-44d5-bea6-ecd091f55d54-metrics-tls\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485691 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/677296cf-109d-4fc1-b3db-c8312605a5fb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485725 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-tmpfs\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485756 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-profile-collector-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485796 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-dir\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485828 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ac8539-334d-4811-8b3e-7a2df9e4c931-serving-cert\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485864 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485917 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.484393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89746c70-7e6b-4f62-acb0-25848752b0bf-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485953 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485991 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/904ff956-5fbf-4e43-aede-3fa612c9bb70-audit-dir\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.485995 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh7hx\" (UniqueName: \"kubernetes.io/projected/2ec3e08f-1312-4857-b152-cde8e51aad05-kube-api-access-jh7hx\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486061 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486092 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486120 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlkc\" (UniqueName: \"kubernetes.io/projected/031f1783-31bd-4008-ace8-3ede7d0a86de-kube-api-access-mqlkc\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486169 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486201 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486226 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd88z\" (UniqueName: \"kubernetes.io/projected/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-kube-api-access-rd88z\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486248 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-apiservice-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486286 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/89746c70-7e6b-4f62-acb0-25848752b0bf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486309 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfs4v\" (UniqueName: \"kubernetes.io/projected/a94b1199-eac7-4e88-ad39-44936959740c-kube-api-access-lfs4v\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2hd8\" (UniqueName: \"kubernetes.io/projected/8285f69a-516d-4bdd-9a14-72d966a0b208-kube-api-access-t2hd8\") pod \"downloads-7954f5f757-j7zvj\" (UID: 
\"8285f69a-516d-4bdd-9a14-72d966a0b208\") " pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486355 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486373 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcl92\" (UniqueName: \"kubernetes.io/projected/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-kube-api-access-xcl92\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486396 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14f9ae8-3c9b-4618-8255-a55408525925-config\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486417 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/91a268d0-59c0-4e7f-8b78-260d14051e34-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486447 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-serving-cert\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-client\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486489 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a605a533-8d8c-47bc-a04c-0739f97482e6-proxy-tls\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486510 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc 
kubenswrapper[4902]: I0121 14:36:16.486534 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfknp\" (UniqueName: \"kubernetes.io/projected/50ac8539-334d-4811-8b3e-7a2df9e4c931-kube-api-access-tfknp\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486560 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a94b1199-eac7-4e88-ad39-44936959740c-signing-key\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486595 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-certs\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486621 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486651 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-node-bootstrap-token\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486715 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486758 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486777 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a605a533-8d8c-47bc-a04c-0739f97482e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486794 4902 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486852 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z66wm\" (UniqueName: \"kubernetes.io/projected/92715363-5170-4018-8a70-eb8274f5ffe0-kube-api-access-z66wm\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486871 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-node-pullsecrets\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486890 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-metrics-tls\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486905 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7dkh\" (UniqueName: \"kubernetes.io/projected/29cc0582-bf2f-4e0b-a351-2d933fdbd52f-kube-api-access-j7dkh\") pod \"migrator-59844c95c7-2n2xb\" (UID: \"29cc0582-bf2f-4e0b-a351-2d933fdbd52f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486923 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486947 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486974 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031f1783-31bd-4008-ace8-3ede7d0a86de-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.486998 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487017 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-plugins-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487032 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-csi-data-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487089 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487107 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-etcd-serving-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487150 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf7xr\" (UniqueName: \"kubernetes.io/projected/43c52dc8-25a9-44d5-bea6-ecd091f55d54-kube-api-access-vf7xr\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487210 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-audit-dir\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487268 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-service-ca-bundle\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-audit-dir\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487396 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5765190c-206a-481f-a72e-4f119e8881bc-config\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487561 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-etcd-serving-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.487873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64d60c19-a655-408a-99e4-becff3e27018-node-pullsecrets\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.488170 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.488292 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.488327 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.488767 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.489414 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.489468 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78db9f9d-1963-42d2-9e52-da80ef710af8-config\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.490126 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ac8539-334d-4811-8b3e-7a2df9e4c931-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.491332 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.491750 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.491980 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493500 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493611 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/91a268d0-59c0-4e7f-8b78-260d14051e34-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493738 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-encryption-config\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493741 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-etcd-client\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5765190c-206a-481f-a72e-4f119e8881bc-serving-cert\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.493931 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-serving-cert\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494063 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64d60c19-a655-408a-99e4-becff3e27018-encryption-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494125 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494336 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494712 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494736 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.494897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-serving-cert\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.495258 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-serving-cert\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.495318 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.496156 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/89746c70-7e6b-4f62-acb0-25848752b0bf-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.496922 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/904ff956-5fbf-4e43-aede-3fa612c9bb70-etcd-client\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.500890 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50ac8539-334d-4811-8b3e-7a2df9e4c931-serving-cert\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.507230 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.527229 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.549378 4902 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.567389 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.587535 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.590735 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.591073 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.091016141 +0000 UTC m=+139.167849190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591438 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/677296cf-109d-4fc1-b3db-c8312605a5fb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-images\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591495 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmkp4\" (UniqueName: \"kubernetes.io/projected/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-kube-api-access-gmkp4\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 
14:36:16.591517 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbn55\" (UniqueName: \"kubernetes.io/projected/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-kube-api-access-qbn55\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591543 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-registration-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-default-certificate\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591678 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh6gd\" (UniqueName: \"kubernetes.io/projected/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-kube-api-access-dh6gd\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591706 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9467c15f-f3fe-4594-b97d-0838d43877d1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591744 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-srv-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591780 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsdgj\" (UniqueName: \"kubernetes.io/projected/9467c15f-f3fe-4594-b97d-0838d43877d1-kube-api-access-bsdgj\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591849 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/78db9f9d-1963-42d2-9e52-da80ef710af8-serving-cert\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591898 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn2m5\" (UniqueName: \"kubernetes.io/projected/78db9f9d-1963-42d2-9e52-da80ef710af8-kube-api-access-zn2m5\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.591910 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.091899051 +0000 UTC m=+139.168732090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.591955 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-mountpoint-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592012 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/031f1783-31bd-4008-ace8-3ede7d0a86de-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592037 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-trusted-ca\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592083 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53985f44-9907-48a1-8912-6163cecceba9-config-volume\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dtcw\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-kube-api-access-2dtcw\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592174 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4449adc-13fa-40ee-a058-f42120e5cbee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592186 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-registration-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592216 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jr7b\" (UniqueName: \"kubernetes.io/projected/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-kube-api-access-4jr7b\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592240 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-mountpoint-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592251 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53985f44-9907-48a1-8912-6163cecceba9-metrics-tls\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592370 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn78g\" (UniqueName: \"kubernetes.io/projected/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-kube-api-access-cn78g\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592459 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-webhook-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592566 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5blcl\" 
(UniqueName: \"kubernetes.io/projected/64f0091d-255f-4e9a-a14c-33d240892e51-kube-api-access-5blcl\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592626 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43c52dc8-25a9-44d5-bea6-ecd091f55d54-metrics-tls\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592748 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/677296cf-109d-4fc1-b3db-c8312605a5fb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592796 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/677296cf-109d-4fc1-b3db-c8312605a5fb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592798 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-tmpfs\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592854 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-profile-collector-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592887 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh7hx\" (UniqueName: \"kubernetes.io/projected/2ec3e08f-1312-4857-b152-cde8e51aad05-kube-api-access-jh7hx\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592920 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592940 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqlkc\" (UniqueName: \"kubernetes.io/projected/031f1783-31bd-4008-ace8-3ede7d0a86de-kube-api-access-mqlkc\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.592975 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd88z\" (UniqueName: \"kubernetes.io/projected/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-kube-api-access-rd88z\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593009 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-apiservice-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593033 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfs4v\" (UniqueName: \"kubernetes.io/projected/a94b1199-eac7-4e88-ad39-44936959740c-kube-api-access-lfs4v\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14f9ae8-3c9b-4618-8255-a55408525925-config\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593103 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a605a533-8d8c-47bc-a04c-0739f97482e6-proxy-tls\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593127 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a94b1199-eac7-4e88-ad39-44936959740c-signing-key\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-certs\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593160 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" 
Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593177 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-node-bootstrap-token\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593205 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a605a533-8d8c-47bc-a04c-0739f97482e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593258 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593298 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593316 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z66wm\" (UniqueName: \"kubernetes.io/projected/92715363-5170-4018-8a70-eb8274f5ffe0-kube-api-access-z66wm\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593334 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-metrics-tls\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7dkh\" (UniqueName: \"kubernetes.io/projected/29cc0582-bf2f-4e0b-a351-2d933fdbd52f-kube-api-access-j7dkh\") pod \"migrator-59844c95c7-2n2xb\" (UID: \"29cc0582-bf2f-4e0b-a351-2d933fdbd52f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593369 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593395 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031f1783-31bd-4008-ace8-3ede7d0a86de-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593412 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-plugins-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593455 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-csi-data-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593495 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf7xr\" (UniqueName: \"kubernetes.io/projected/43c52dc8-25a9-44d5-bea6-ecd091f55d54-kube-api-access-vf7xr\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593517 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-service-ca-bundle\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593600 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593653 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78db9f9d-1963-42d2-9e52-da80ef710af8-config\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593675 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/677296cf-109d-4fc1-b3db-c8312605a5fb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: 
\"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593719 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4449adc-13fa-40ee-a058-f42120e5cbee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a94b1199-eac7-4e88-ad39-44936959740c-signing-cabundle\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593765 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64f0091d-255f-4e9a-a14c-33d240892e51-proxy-tls\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593788 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4frg\" (UniqueName: \"kubernetes.io/projected/53985f44-9907-48a1-8912-6163cecceba9-kube-api-access-w4frg\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593808 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593827 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-stats-auth\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593842 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec3e08f-1312-4857-b152-cde8e51aad05-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 
14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593892 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8tnw\" (UniqueName: \"kubernetes.io/projected/a605a533-8d8c-47bc-a04c-0739f97482e6-kube-api-access-d8tnw\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593944 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-metrics-certs\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593971 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-srv-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593992 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-cert\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594023 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn7jw\" (UniqueName: \"kubernetes.io/projected/2c1970f7-f131-4594-b396-d33bb9776e33-kube-api-access-zn7jw\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594072 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjp5\" (UniqueName: \"kubernetes.io/projected/ef463925-8c6c-4217-9bba-e15e1283c4c8-kube-api-access-hjjp5\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a14f9ae8-3c9b-4618-8255-a55408525925-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594141 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4449adc-13fa-40ee-a058-f42120e5cbee-config\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594175 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594197 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-socket-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594221 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14f9ae8-3c9b-4618-8255-a55408525925-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.594731 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-trusted-ca\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.595534 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-default-certificate\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.595748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.595780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-srv-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.593849 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-tmpfs\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596199 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/031f1783-31bd-4008-ace8-3ede7d0a86de-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596514 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a14f9ae8-3c9b-4618-8255-a55408525925-config\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596590 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78db9f9d-1963-42d2-9e52-da80ef710af8-config\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/677296cf-109d-4fc1-b3db-c8312605a5fb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.596845 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.597357 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a605a533-8d8c-47bc-a04c-0739f97482e6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.599299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-profile-collector-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.599461 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43c52dc8-25a9-44d5-bea6-ecd091f55d54-metrics-tls\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.599600 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-csi-data-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600028 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600105 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9467c15f-f3fe-4594-b97d-0838d43877d1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600238 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031f1783-31bd-4008-ace8-3ede7d0a86de-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600311 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-plugins-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.600392 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a14f9ae8-3c9b-4618-8255-a55408525925-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.601322 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78db9f9d-1963-42d2-9e52-da80ef710af8-serving-cert\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.601401 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-service-ca-bundle\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.601882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.602395 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.602432 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4449adc-13fa-40ee-a058-f42120e5cbee-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.602510 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c1970f7-f131-4594-b396-d33bb9776e33-socket-dir\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.602530 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603126 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603524 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/92715363-5170-4018-8a70-eb8274f5ffe0-srv-cert\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603684 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4449adc-13fa-40ee-a058-f42120e5cbee-config\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603902 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-metrics-certs\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.603927 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-metrics-tls\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 
14:36:16.604237 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec3e08f-1312-4857-b152-cde8e51aad05-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.604675 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.605289 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ef463925-8c6c-4217-9bba-e15e1283c4c8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.605516 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-stats-auth\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.606719 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a605a533-8d8c-47bc-a04c-0739f97482e6-proxy-tls\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.607172 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.612480 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a94b1199-eac7-4e88-ad39-44936959740c-signing-key\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.628300 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.647444 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.651017 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a94b1199-eac7-4e88-ad39-44936959740c-signing-cabundle\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.667872 4902 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.687114 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.695263 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.695444 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.19541986 +0000 UTC m=+139.272252889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.695558 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.695665 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.695891 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.195883896 +0000 UTC m=+139.272716925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.696012 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.696275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-webhook-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.696994 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-apiservice-cert\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.707281 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.713024 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/64f0091d-255f-4e9a-a14c-33d240892e51-images\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.727030 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.747511 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.754155 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64f0091d-255f-4e9a-a14c-33d240892e51-proxy-tls\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.767748 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.787194 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.793747 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-cert\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.796641 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.796816 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.296794816 +0000 UTC m=+139.373627855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.797615 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.797943 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.297933974 +0000 UTC m=+139.374767003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.808556 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.828193 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.884423 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4bc8\" (UniqueName: \"kubernetes.io/projected/c690c8a8-1bd9-45ff-ba62-93cb7f1ce890-kube-api-access-g4bc8\") pod \"openshift-config-operator-7777fb866f-wpch6\" (UID: \"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.898957 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.899420 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.399386613 +0000 UTC m=+139.476219812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.899953 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:16 crc kubenswrapper[4902]: E0121 14:36:16.900442 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.400423498 +0000 UTC m=+139.477256567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.906583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpczv\" (UniqueName: \"kubernetes.io/projected/3a537cbb-d314-4f04-94c8-625c03eb5a68-kube-api-access-dpczv\") pod \"machine-approver-56656f9798-p4tq6\" (UID: \"3a537cbb-d314-4f04-94c8-625c03eb5a68\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.926248 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") pod \"console-f9d7485db-9nw4v\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.927256 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.935518 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53985f44-9907-48a1-8912-6163cecceba9-metrics-tls\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.938677 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.947447 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.956763 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.967291 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.973111 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53985f44-9907-48a1-8912-6163cecceba9-config-volume\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:16 crc kubenswrapper[4902]: I0121 14:36:16.988375 4902 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.000961 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.001157 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.501122072 +0000 UTC m=+139.577955101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.001677 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.002064 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.502035652 +0000 UTC m=+139.578868681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.007557 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.027603 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.047790 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.067790 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.080721 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-certs\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.089979 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.100187 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-node-bootstrap-token\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.104818 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.106587 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.606527854 +0000 UTC m=+139.683361063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.106773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.107517 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.607506637 +0000 UTC m=+139.684339676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.125292 4902 request.go:700] Waited for 1.831098344s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.127414 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.147203 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.167421 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.187454 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.204939 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wpch6"] Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.210476 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.210730 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.211476 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.71146146 +0000 UTC m=+139.788294489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.226882 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.247269 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.266668 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.271306 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-config\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.287831 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.307120 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.312672 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.313273 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.81324283 +0000 UTC m=+139.890075899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.315254 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.327159 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.336347 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64d60c19-a655-408a-99e4-becff3e27018-image-import-ca\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.349906 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.357245 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.367147 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.368507 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:17 crc kubenswrapper[4902]: W0121 14:36:17.373865 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a537cbb_d314_4f04_94c8_625c03eb5a68.slice/crio-051dcc6bc0cb5ed3e7bc82b96a35dfc4490f6f6a020b93d8511dea11f6ca28c9 WatchSource:0}: Error finding container 051dcc6bc0cb5ed3e7bc82b96a35dfc4490f6f6a020b93d8511dea11f6ca28c9: Status 404 returned error can't find the container with id 051dcc6bc0cb5ed3e7bc82b96a35dfc4490f6f6a020b93d8511dea11f6ca28c9 Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.383068 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.387994 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 14:36:17 crc kubenswrapper[4902]: W0121 14:36:17.389837 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod853f0809_8828_4976_9b04_dd078ab64ced.slice/crio-11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53 WatchSource:0}: Error finding container 11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53: Status 404 returned error can't find the container with id 11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53 Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.391242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.407113 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.413979 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.414297 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.914260584 +0000 UTC m=+139.991093743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.414884 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.415323 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.91531008 +0000 UTC m=+139.992143109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.427263 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.433486 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.447866 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.467227 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.474230 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.480202 4902 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.480350 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles 
podName:c7158f8a-be32-4700-857f-faf9157f99f5 nodeName:}" failed. No retries permitted until 2026-01-21 14:36:17.980330577 +0000 UTC m=+140.057163616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles") pod "controller-manager-879f6c89f-tn2zp" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5") : failed to sync configmap cache: timed out waiting for the condition Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.486578 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.499511 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.506763 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.516113 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.517195 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.017149172 +0000 UTC m=+140.093982371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.517855 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"route-controller-manager-6576b87f9c-xrcxf\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.533421 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.560888 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.581098 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zp8n\" (UniqueName: \"kubernetes.io/projected/91a268d0-59c0-4e7f-8b78-260d14051e34-kube-api-access-6zp8n\") pod \"machine-api-operator-5694c8668f-57jmg\" (UID: \"91a268d0-59c0-4e7f-8b78-260d14051e34\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.601875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f699h\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-kube-api-access-f699h\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.618013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.618812 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.118777626 +0000 UTC m=+140.195610655 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.621098 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64q7c\" (UniqueName: \"kubernetes.io/projected/71696f1d-02bf-4fc5-a7f5-8dc351b3bf86-kube-api-access-64q7c\") pod \"console-operator-58897d9998-9hktz\" (UID: \"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86\") " pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.641396 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.661158 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljxl2\" (UniqueName: \"kubernetes.io/projected/5765190c-206a-481f-a72e-4f119e8881bc-kube-api-access-ljxl2\") pod \"etcd-operator-b45778765-lrgnw\" (UID: \"5765190c-206a-481f-a72e-4f119e8881bc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.682010 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q56gh\" (UniqueName: \"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.707269 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw7zz\" (UniqueName: \"kubernetes.io/projected/1eabb5ac-ae9e-4853-a2ec-2d821a4883f8-kube-api-access-vw7zz\") pod \"cluster-samples-operator-665b6dd947-jt8f8\" (UID: \"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.719301 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.719501 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.219469529 +0000 UTC m=+140.296302568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.719852 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.720209 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.220196274 +0000 UTC m=+140.297029303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.722210 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrx7w\" (UniqueName: \"kubernetes.io/projected/904ff956-5fbf-4e43-aede-3fa612c9bb70-kube-api-access-rrx7w\") pod \"apiserver-7bbb656c7d-wgsqz\" (UID: \"904ff956-5fbf-4e43-aede-3fa612c9bb70\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.732191 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.739976 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwhzd\" (UniqueName: \"kubernetes.io/projected/64d60c19-a655-408a-99e4-becff3e27018-kube-api-access-bwhzd\") pod \"apiserver-76f77b778f-x9bhh\" (UID: \"64d60c19-a655-408a-99e4-becff3e27018\") " pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.760805 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89746c70-7e6b-4f62-acb0-25848752b0bf-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-scxh2\" (UID: \"89746c70-7e6b-4f62-acb0-25848752b0bf\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.765349 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.780984 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") pod \"oauth-openshift-558db77b4-n2xzb\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.784940 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.802464 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.821609 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.822582 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.322554683 +0000 UTC m=+140.399387712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.829688 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcl92\" (UniqueName: \"kubernetes.io/projected/a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8-kube-api-access-xcl92\") pod \"openshift-apiserver-operator-796bbdcf4f-gvxn5\" (UID: \"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.832502 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.843297 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfknp\" (UniqueName: \"kubernetes.io/projected/50ac8539-334d-4811-8b3e-7a2df9e4c931-kube-api-access-tfknp\") pod \"authentication-operator-69f744f599-k2wkm\" (UID: \"50ac8539-334d-4811-8b3e-7a2df9e4c931\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.847397 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.867770 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmkp4\" (UniqueName: \"kubernetes.io/projected/4c2958e3-5395-4efd-8b8f-f3e70fd9fcea-kube-api-access-gmkp4\") pod \"multus-admission-controller-857f4d67dd-q69sb\" (UID: \"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.871437 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.885386 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") pod \"marketplace-operator-79b997595-xm5cd\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.891646 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.909828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbn55\" (UniqueName: \"kubernetes.io/projected/95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0-kube-api-access-qbn55\") pod \"ingress-canary-rfwp8\" (UID: \"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0\") " pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.915308 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.925307 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:17 crc kubenswrapper[4902]: E0121 14:36:17.925875 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.425860284 +0000 UTC m=+140.502693313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.927332 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh6gd\" (UniqueName: \"kubernetes.io/projected/eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f-kube-api-access-dh6gd\") pod \"packageserver-d55dfcdfc-tgt87\" (UID: \"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.943421 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn2m5\" (UniqueName: \"kubernetes.io/projected/78db9f9d-1963-42d2-9e52-da80ef710af8-kube-api-access-zn2m5\") pod \"service-ca-operator-777779d784-nshzl\" (UID: \"78db9f9d-1963-42d2-9e52-da80ef710af8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.944210 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.966729 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.978797 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x9bhh"] Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.979202 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" Jan 21 14:36:17 crc kubenswrapper[4902]: I0121 14:36:17.988014 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dtcw\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-kube-api-access-2dtcw\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.003167 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jr7b\" (UniqueName: \"kubernetes.io/projected/52fccef5-5bbc-4411-9eb0-fcca74e3c3f1-kube-api-access-4jr7b\") pod \"router-default-5444994796-2lccn\" (UID: \"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1\") " pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.013866 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.019242 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.027938 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.028289 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.528255435 +0000 UTC m=+140.605088474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.029479 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.029629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.030455 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.530440249 +0000 UTC m=+140.607273468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.031543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4449adc-13fa-40ee-a058-f42120e5cbee-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wchr8\" (UID: \"c4449adc-13fa-40ee-a058-f42120e5cbee\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.031679 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tn2zp\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.036117 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-57jmg"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.046831 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn78g\" (UniqueName: \"kubernetes.io/projected/0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a-kube-api-access-cn78g\") pod \"kube-storage-version-migrator-operator-b67b599dd-79b2r\" (UID: \"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.058563 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.060881 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.062834 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5blcl\" (UniqueName: \"kubernetes.io/projected/64f0091d-255f-4e9a-a14c-33d240892e51-kube-api-access-5blcl\") pod \"machine-config-operator-74547568cd-qldcg\" (UID: \"64f0091d-255f-4e9a-a14c-33d240892e51\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.067345 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.069201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" event={"ID":"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890","Type":"ContainerStarted","Data":"6a818a5ecd195b0ebaac59b22f2bfa936d40bae0bdb2704bfc7ef05169b47826"} Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.070203 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" event={"ID":"01ee90aa-9465-4cd2-97a0-ce735d557649","Type":"ContainerStarted","Data":"1d6f20bc21db99ffc3b51f783b09029cf7dec2c4ed9b3a8a2f63bf561b414a3a"} Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.071298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" event={"ID":"3a537cbb-d314-4f04-94c8-625c03eb5a68","Type":"ContainerStarted","Data":"051dcc6bc0cb5ed3e7bc82b96a35dfc4490f6f6a020b93d8511dea11f6ca28c9"} Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.072241 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9nw4v" event={"ID":"853f0809-8828-4976-9b04-dd078ab64ced","Type":"ContainerStarted","Data":"11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53"} Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.076706 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rfwp8" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.093574 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9hktz"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.103754 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqlkc\" (UniqueName: \"kubernetes.io/projected/031f1783-31bd-4008-ace8-3ede7d0a86de-kube-api-access-mqlkc\") pod \"openshift-controller-manager-operator-756b6f6bc6-895km\" (UID: \"031f1783-31bd-4008-ace8-3ede7d0a86de\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.112373 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh7hx\" (UniqueName: \"kubernetes.io/projected/2ec3e08f-1312-4857-b152-cde8e51aad05-kube-api-access-jh7hx\") pod \"package-server-manager-789f6589d5-67gqb\" (UID: \"2ec3e08f-1312-4857-b152-cde8e51aad05\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.113262 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2hd8\" (UniqueName: \"kubernetes.io/projected/8285f69a-516d-4bdd-9a14-72d966a0b208-kube-api-access-t2hd8\") pod \"downloads-7954f5f757-j7zvj\" (UID: \"8285f69a-516d-4bdd-9a14-72d966a0b208\") " pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.123261 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd88z\" (UniqueName: \"kubernetes.io/projected/5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a-kube-api-access-rd88z\") pod \"machine-config-server-w8c9w\" (UID: \"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a\") " 
pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.123979 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91a268d0_59c0_4e7f_8b78_260d14051e34.slice/crio-315499d391964a9a8af5168675d61a0c2395be7e074ca7a7e847750e9e115529 WatchSource:0}: Error finding container 315499d391964a9a8af5168675d61a0c2395be7e074ca7a7e847750e9e115529: Status 404 returned error can't find the container with id 315499d391964a9a8af5168675d61a0c2395be7e074ca7a7e847750e9e115529 Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.127023 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.128401 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.138080 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.138287 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.638263823 +0000 UTC m=+140.715096852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.138323 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.138841 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.638831932 +0000 UTC m=+140.715664961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.140664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsdgj\" (UniqueName: \"kubernetes.io/projected/9467c15f-f3fe-4594-b97d-0838d43877d1-kube-api-access-bsdgj\") pod \"control-plane-machine-set-operator-78cbb6b69f-qm6gk\" (UID: \"9467c15f-f3fe-4594-b97d-0838d43877d1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.143911 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") pod \"collect-profiles-29483430-xwzfw\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.145272 4902 request.go:700] Waited for 1.549484367s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.146211 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c16a673_e56a_49ff_ac34_6910e02214a6.slice/crio-7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60 WatchSource:0}: Error finding container 7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60: Status 404 returned error can't find the container with id 7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60 Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.170496 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfs4v\" (UniqueName: \"kubernetes.io/projected/a94b1199-eac7-4e88-ad39-44936959740c-kube-api-access-lfs4v\") pod \"service-ca-9c57cc56f-lrz7m\" (UID: \"a94b1199-eac7-4e88-ad39-44936959740c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.184668 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7faf6fc-58fe-4457-bb7c-510fce0b60a7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lpsnj\" (UID: \"d7faf6fc-58fe-4457-bb7c-510fce0b60a7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.205060 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn7jw\" (UniqueName: \"kubernetes.io/projected/2c1970f7-f131-4594-b396-d33bb9776e33-kube-api-access-zn7jw\") pod \"csi-hostpathplugin-v4hs9\" (UID: \"2c1970f7-f131-4594-b396-d33bb9776e33\") " pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.205325 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.226034 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8tnw\" (UniqueName: \"kubernetes.io/projected/a605a533-8d8c-47bc-a04c-0739f97482e6-kube-api-access-d8tnw\") pod \"machine-config-controller-84d6567774-5q929\" (UID: \"a605a533-8d8c-47bc-a04c-0739f97482e6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.230740 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.242503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.242877 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.742851148 +0000 UTC m=+140.819684177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.243096 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.243575 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.743563072 +0000 UTC m=+140.820396111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.243643 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/677296cf-109d-4fc1-b3db-c8312605a5fb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8wfd7\" (UID: \"677296cf-109d-4fc1-b3db-c8312605a5fb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.244004 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.252515 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.257976 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.269617 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4frg\" (UniqueName: \"kubernetes.io/projected/53985f44-9907-48a1-8912-6163cecceba9-kube-api-access-w4frg\") pod \"dns-default-w2qlx\" (UID: \"53985f44-9907-48a1-8912-6163cecceba9\") " pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.282141 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.282798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf7xr\" (UniqueName: \"kubernetes.io/projected/43c52dc8-25a9-44d5-bea6-ecd091f55d54-kube-api-access-vf7xr\") pod \"dns-operator-744455d44c-b5657\" (UID: \"43c52dc8-25a9-44d5-bea6-ecd091f55d54\") " pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.286496 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.303591 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.317964 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjp5\" (UniqueName: \"kubernetes.io/projected/ef463925-8c6c-4217-9bba-e15e1283c4c8-kube-api-access-hjjp5\") pod \"olm-operator-6b444d44fb-zzmhd\" (UID: \"ef463925-8c6c-4217-9bba-e15e1283c4c8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.318334 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.329334 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.330457 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.332755 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a14f9ae8-3c9b-4618-8255-a55408525925-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gdzsm\" (UID: \"a14f9ae8-3c9b-4618-8255-a55408525925\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.338761 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.343965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.344537 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.844521074 +0000 UTC m=+140.921354103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.344543 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.355623 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.358387 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z66wm\" (UniqueName: \"kubernetes.io/projected/92715363-5170-4018-8a70-eb8274f5ffe0-kube-api-access-z66wm\") pod \"catalog-operator-68c6474976-hf96t\" (UID: \"92715363-5170-4018-8a70-eb8274f5ffe0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.365890 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7dkh\" (UniqueName: \"kubernetes.io/projected/29cc0582-bf2f-4e0b-a351-2d933fdbd52f-kube-api-access-j7dkh\") pod \"migrator-59844c95c7-2n2xb\" (UID: \"29cc0582-bf2f-4e0b-a351-2d933fdbd52f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.383732 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.400876 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.411538 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w8c9w" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.415327 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.445539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.446226 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:18.946212291 +0000 UTC m=+141.023045330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.501908 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"] Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.534380 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod179de16d_c6d0_4cda_8d1f_8c2396301175.slice/crio-55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe WatchSource:0}: Error finding container 55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe: Status 404 returned error can't find the container with id 55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.541279 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.542482 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.547480 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.547828 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.047811474 +0000 UTC m=+141.124644503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.573757 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.593677 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.609662 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.614576 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.637530 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lrgnw"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.651126 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.651517 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.151504549 +0000 UTC m=+141.228337578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.656066 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.715451 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rfwp8"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.718828 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q69sb"] Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.739064 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff3ea2d_0de6_4bad_81e6_f3cac0c4d48f.slice/crio-a92c48680c9f7a7f33ca1abfb213e98051895aebd64b396770168e61709627c4 WatchSource:0}: Error finding container a92c48680c9f7a7f33ca1abfb213e98051895aebd64b396770168e61709627c4: Status 404 returned error can't find the container with id a92c48680c9f7a7f33ca1abfb213e98051895aebd64b396770168e61709627c4 Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.749397 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k2wkm"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.752423 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.752859 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.252841894 +0000 UTC m=+141.329674913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.805803 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nshzl"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.810101 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg"] Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.853915 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.854246 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.3542329 +0000 UTC m=+141.431065929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:18 crc kubenswrapper[4902]: W0121 14:36:18.934300 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95e79e6b_37ae_4e8d_9f95_65e8a8ae49b0.slice/crio-ccd123fd6ec11b6210e75d00c168e4ead7f7a827673c8f2d9be66d639ada2844 WatchSource:0}: Error finding container ccd123fd6ec11b6210e75d00c168e4ead7f7a827673c8f2d9be66d639ada2844: Status 404 returned error can't find the container with id ccd123fd6ec11b6210e75d00c168e4ead7f7a827673c8f2d9be66d639ada2844 Jan 21 14:36:18 crc kubenswrapper[4902]: I0121 14:36:18.954695 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:18 crc kubenswrapper[4902]: E0121 14:36:18.955024 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.455008156 +0000 UTC m=+141.531841185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.055687 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.056077 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.556065282 +0000 UTC m=+141.632898311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.083085 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" event={"ID":"64d60c19-a655-408a-99e4-becff3e27018","Type":"ContainerStarted","Data":"3b5c9cc51e92d048ecd66327c41f5275b88c5ce220a0a780aad28268d57e2dac"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.083862 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w8c9w" event={"ID":"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a","Type":"ContainerStarted","Data":"68a4542899a519107a49bb37222c446ee45e60c23805946a9d89c67dbc26ea92"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.084908 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2lccn" event={"ID":"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1","Type":"ContainerStarted","Data":"04667acda1446fd6a37275303b8fbaa4d36de6e8eca36ef82b6f08e047fc408a"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.086640 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" event={"ID":"0c16a673-e56a-49ff-ac34-6910e02214a6","Type":"ContainerStarted","Data":"7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.087898 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rfwp8" event={"ID":"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0","Type":"ContainerStarted","Data":"ccd123fd6ec11b6210e75d00c168e4ead7f7a827673c8f2d9be66d639ada2844"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.089110 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" event={"ID":"904ff956-5fbf-4e43-aede-3fa612c9bb70","Type":"ContainerStarted","Data":"49e6e600cb34972f6f70cce4ff6a907b9ecc3aa47fdba7eda5ff2372dea787ca"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.090703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" event={"ID":"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8","Type":"ContainerStarted","Data":"16dbfe7d3d658bb09c023f421c923ac10d5b14c33945b0e67a500dd1c3ce5395"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.093337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" event={"ID":"91a268d0-59c0-4e7f-8b78-260d14051e34","Type":"ContainerStarted","Data":"b60e5d0e06ea28ac604e7f129a54b7e66bbf9b95b0af22ce7f62af7abe20d1d5"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.093365 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" event={"ID":"91a268d0-59c0-4e7f-8b78-260d14051e34","Type":"ContainerStarted","Data":"315499d391964a9a8af5168675d61a0c2395be7e074ca7a7e847750e9e115529"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.101615 4902 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-console/console-f9d7485db-9nw4v" event={"ID":"853f0809-8828-4976-9b04-dd078ab64ced","Type":"ContainerStarted","Data":"fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.103982 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" event={"ID":"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea","Type":"ContainerStarted","Data":"dea58454a77ab195fc7990a7797560df7b651eb42b7155f0958e67090ef3cd08"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.116371 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" event={"ID":"01ee90aa-9465-4cd2-97a0-ce735d557649","Type":"ContainerStarted","Data":"6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.117407 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.135072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" event={"ID":"3a537cbb-d314-4f04-94c8-625c03eb5a68","Type":"ContainerStarted","Data":"a2021115a06b9247806e6ca8e73e0df80cc17a0ead88a1a22bd38b2a7465b773"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.135027 4902 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xrcxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.135163 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.136698 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" event={"ID":"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f","Type":"ContainerStarted","Data":"a92c48680c9f7a7f33ca1abfb213e98051895aebd64b396770168e61709627c4"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.142304 4902 generic.go:334] "Generic (PLEG): container finished" podID="c690c8a8-1bd9-45ff-ba62-93cb7f1ce890" containerID="2070ddad1c2f5e568896413d3c2579ee26a5f2ee71f94c673a6b2981ac178e55" exitCode=0 Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.142597 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" event={"ID":"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890","Type":"ContainerDied","Data":"2070ddad1c2f5e568896413d3c2579ee26a5f2ee71f94c673a6b2981ac178e55"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.145692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" event={"ID":"78db9f9d-1963-42d2-9e52-da80ef710af8","Type":"ContainerStarted","Data":"39057a8537a0d733eba895908d7cba8993302422bc45edb3e0f400f26fe34666"} Jan 21 
14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.156184 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.156584 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.656569398 +0000 UTC m=+141.733402427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.173405 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" event={"ID":"64f0091d-255f-4e9a-a14c-33d240892e51","Type":"ContainerStarted","Data":"7830105e01a70233d53b29dd6ad721cea91d7d966cdcf81ab1149690397f310d"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.175769 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" event={"ID":"5765190c-206a-481f-a72e-4f119e8881bc","Type":"ContainerStarted","Data":"af9a4c5417f31d495b933557f861be71016754127f329cd492bd254914a008a6"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.178029 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" event={"ID":"50ac8539-334d-4811-8b3e-7a2df9e4c931","Type":"ContainerStarted","Data":"96901f7c8e0e1a0b90398b171ed2b4422f8be867bf4ce25bf476460739f0265c"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.184776 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" event={"ID":"89746c70-7e6b-4f62-acb0-25848752b0bf","Type":"ContainerStarted","Data":"6b71b6ab643b50a2a8fb6aff5b6c6d934f2132558e0895bc0f3370f5a4c9fe6c"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.197544 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" event={"ID":"179de16d-c6d0-4cda-8d1f-8c2396301175","Type":"ContainerStarted","Data":"55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.208463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9hktz" event={"ID":"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86","Type":"ContainerStarted","Data":"025c731fc711aaa053663870e6d80837e939266d5959b9e0ce0d30d685b6a8b7"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.208499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9hktz" 
event={"ID":"71696f1d-02bf-4fc5-a7f5-8dc351b3bf86","Type":"ContainerStarted","Data":"bd4b4c2546706399440225b2783ab05a0e42663479be86605c3e684a1ad16a84"} Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.209171 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.210709 4902 patch_prober.go:28] interesting pod/console-operator-58897d9998-9hktz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.210760 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9hktz" podUID="71696f1d-02bf-4fc5-a7f5-8dc351b3bf86" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.243508 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj"] Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.259184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.261734 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.761721192 +0000 UTC m=+141.838554221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.362466 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.362778 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.862743676 +0000 UTC m=+141.939576705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.363573 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.364526 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.864509726 +0000 UTC m=+141.941342755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.403589 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w2qlx"] Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.422103 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-j7zvj"] Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.465179 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.465428 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.965392225 +0000 UTC m=+142.042225264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.465539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.465968 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:19.965956915 +0000 UTC m=+142.042789954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.566473 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.566822 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.066806393 +0000 UTC m=+142.143639422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.631029 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod904ff956_5fbf_4e43_aede_3fa612c9bb70.slice/crio-conmon-ab669bf48465c2d82c87c849c36300ffcf45a563f58cfbdf46cb032576ff4014.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64d60c19_a655_408a_99e4_becff3e27018.slice/crio-conmon-a40291ad90a37546c69056c86f5fd6bb86ff3a4c5ea686b6f4b0bec92e3cd415.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod904ff956_5fbf_4e43_aede_3fa612c9bb70.slice/crio-ab669bf48465c2d82c87c849c36300ffcf45a563f58cfbdf46cb032576ff4014.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.669098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.669894 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.169848166 +0000 UTC m=+142.246681195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.770943 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.771383 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.271364495 +0000 UTC m=+142.348197524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.874955 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.875571 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.375545656 +0000 UTC m=+142.452378685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.983515 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.983720 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.483690471 +0000 UTC m=+142.560523500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:19 crc kubenswrapper[4902]: I0121 14:36:19.993332 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:19 crc kubenswrapper[4902]: E0121 14:36:19.995155 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.495130248 +0000 UTC m=+142.571963277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.030930 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.043559 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lrz7m"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.052833 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v4hs9"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.066154 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.066572 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9nw4v" podStartSLOduration=122.066551352 podStartE2EDuration="2m2.066551352s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.064894276 +0000 UTC m=+142.141727315" watchObservedRunningTime="2026-01-21 14:36:20.066551352 +0000 UTC m=+142.143384381" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.099711 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.100387 4902 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.600366835 +0000 UTC m=+142.677199874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.100932 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-9hktz" podStartSLOduration=122.100914063 podStartE2EDuration="2m2.100914063s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.099413752 +0000 UTC m=+142.176246791" watchObservedRunningTime="2026-01-21 14:36:20.100914063 +0000 UTC m=+142.177747092" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.182025 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podStartSLOduration=122.182002914 podStartE2EDuration="2m2.182002914s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.179012213 +0000 UTC m=+142.255845242" watchObservedRunningTime="2026-01-21 14:36:20.182002914 +0000 UTC m=+142.258835943" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.201100 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.201431 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.70141845 +0000 UTC m=+142.778251479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.214227 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" event={"ID":"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8","Type":"ContainerStarted","Data":"f3e3c696d580ba36d66d8f54bae312874665528fcaca3bc905925c867f9e2fa8"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.215321 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2lccn" event={"ID":"52fccef5-5bbc-4411-9eb0-fcca74e3c3f1","Type":"ContainerStarted","Data":"46d32293e9a1ebe95002d8fc44dedd1b77910021f7c303a17376c725c8c4c09f"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.216986 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" event={"ID":"c7158f8a-be32-4700-857f-faf9157f99f5","Type":"ContainerStarted","Data":"31b5818a193a42b1200764cd8a3a2ec82450c46b99cee82fd307ec9a84582b72"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.218493 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-j7zvj" event={"ID":"8285f69a-516d-4bdd-9a14-72d966a0b208","Type":"ContainerStarted","Data":"f12309a0d6ca0f497ebc178cfaaa2b142c59ceecdbea722a81d5a55f1dfbbbdf"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.219749 4902 generic.go:334] "Generic (PLEG): container finished" podID="904ff956-5fbf-4e43-aede-3fa612c9bb70" containerID="ab669bf48465c2d82c87c849c36300ffcf45a563f58cfbdf46cb032576ff4014" exitCode=0 Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.219831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" event={"ID":"904ff956-5fbf-4e43-aede-3fa612c9bb70","Type":"ContainerDied","Data":"ab669bf48465c2d82c87c849c36300ffcf45a563f58cfbdf46cb032576ff4014"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.221723 4902 generic.go:334] "Generic (PLEG): container finished" podID="64d60c19-a655-408a-99e4-becff3e27018" containerID="a40291ad90a37546c69056c86f5fd6bb86ff3a4c5ea686b6f4b0bec92e3cd415" exitCode=0 Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.221801 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" event={"ID":"64d60c19-a655-408a-99e4-becff3e27018","Type":"ContainerDied","Data":"a40291ad90a37546c69056c86f5fd6bb86ff3a4c5ea686b6f4b0bec92e3cd415"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.223724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" event={"ID":"0c16a673-e56a-49ff-ac34-6910e02214a6","Type":"ContainerStarted","Data":"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.223997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.224857 4902 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" event={"ID":"3a537cbb-d314-4f04-94c8-625c03eb5a68","Type":"ContainerStarted","Data":"d12d5adaf4cf0a0de2807d4df1984f2a010fb9f9818990386a5c00cca0f58a26"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.226237 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" event={"ID":"ef463925-8c6c-4217-9bba-e15e1283c4c8","Type":"ContainerStarted","Data":"854880c9532f4e83d83dca8a7d08151734f438f7e3dff43a0dca0a285d133256"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.226987 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"609ac5d4fbccd4f8093eadf963d6f2a6133aa02fadebe3c6d80a03852e44360e"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.230168 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" event={"ID":"d7faf6fc-58fe-4457-bb7c-510fce0b60a7","Type":"ContainerStarted","Data":"25a6bde7b5ab658b0501b574c3d07bae218c912f2fa840c9a069625655bdee6a"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.230857 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w2qlx" event={"ID":"53985f44-9907-48a1-8912-6163cecceba9","Type":"ContainerStarted","Data":"78756f4d434114901247c17bc97805dca0ac35ef01e7b1ced811639a49abfc08"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.232183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" event={"ID":"89746c70-7e6b-4f62-acb0-25848752b0bf","Type":"ContainerStarted","Data":"cc02839a84c71e426caecaa1a34091b036fa383659a5d16dc87775660009b2f1"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.233987 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2lccn" podStartSLOduration=122.23397285 podStartE2EDuration="2m2.23397285s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.233512645 +0000 UTC m=+142.310345674" watchObservedRunningTime="2026-01-21 14:36:20.23397285 +0000 UTC m=+142.310805879" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.234538 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" event={"ID":"a94b1199-eac7-4e88-ad39-44936959740c","Type":"ContainerStarted","Data":"481ed30831571fe7fef815e1e3f5d943baba998445b57b31d3dba142f2a78f09"} Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.242246 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.258818 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.262813 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Jan 21 14:36:20 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:20 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:20 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.263178 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.300800 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" podStartSLOduration=122.300779678 podStartE2EDuration="2m2.300779678s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.260205687 +0000 UTC m=+142.337038716" watchObservedRunningTime="2026-01-21 14:36:20.300779678 +0000 UTC m=+142.377612707" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.303105 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.304906 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.804883897 +0000 UTC m=+142.881716936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.336323 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p4tq6" podStartSLOduration=122.336303619 podStartE2EDuration="2m2.336303619s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.33397285 +0000 UTC m=+142.410805879" watchObservedRunningTime="2026-01-21 14:36:20.336303619 +0000 UTC m=+142.413136648" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.405714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.407750 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:20.907728662 +0000 UTC m=+142.984561701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.459205 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-scxh2" podStartSLOduration=122.459179801 podStartE2EDuration="2m2.459179801s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:20.456352526 +0000 UTC m=+142.533185565" watchObservedRunningTime="2026-01-21 14:36:20.459179801 +0000 UTC m=+142.536012830" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.507387 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.507619 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.007591688 +0000 UTC m=+143.084424717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.507886 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.508277 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.00826362 +0000 UTC m=+143.085096649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.609586 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.609774 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.10974176 +0000 UTC m=+143.186574859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.610130 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.610473 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.110460764 +0000 UTC m=+143.187293793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.711021 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.711278 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.21124557 +0000 UTC m=+143.288078599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.711438 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.711813 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.211754128 +0000 UTC m=+143.288587157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.734879 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-b5657"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.734929 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.739213 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.739283 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.741209 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.744262 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.745960 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.749746 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.812626 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.812794 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.312767181 +0000 UTC m=+143.389600210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.812961 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.813321 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.31330715 +0000 UTC m=+143.390140179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.903570 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5q929"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.905745 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.907602 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.909398 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7"] Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.913465 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.913649 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.41361517 +0000 UTC m=+143.490448219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.914404 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:20 crc kubenswrapper[4902]: E0121 14:36:20.914763 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.414751658 +0000 UTC m=+143.491584687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:20 crc kubenswrapper[4902]: W0121 14:36:20.922561 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c52dc8_25a9_44d5_bea6_ecd091f55d54.slice/crio-6ce89a012e959d0d1c80ae25f0ae117c8a9158c156776e00f960bef70da9a0c8 WatchSource:0}: Error finding container 6ce89a012e959d0d1c80ae25f0ae117c8a9158c156776e00f960bef70da9a0c8: Status 404 returned error can't find the container with id 6ce89a012e959d0d1c80ae25f0ae117c8a9158c156776e00f960bef70da9a0c8 Jan 21 14:36:20 crc kubenswrapper[4902]: W0121 14:36:20.925952 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29cc0582_bf2f_4e0b_a351_2d933fdbd52f.slice/crio-3ee89daab4206d378ab6b73f49d318c45caa0b4e6dfa32ea69e1145e23de646b WatchSource:0}: Error finding container 3ee89daab4206d378ab6b73f49d318c45caa0b4e6dfa32ea69e1145e23de646b: Status 404 returned error can't find the container with id 3ee89daab4206d378ab6b73f49d318c45caa0b4e6dfa32ea69e1145e23de646b Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.954298 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:36:20 crc kubenswrapper[4902]: I0121 14:36:20.954555 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-9hktz" Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.015430 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.015674 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.515626057 +0000 UTC m=+143.592459086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.015850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.016253 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.516233608 +0000 UTC m=+143.593066637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.117847 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.118013 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.617980067 +0000 UTC m=+143.694813086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.118512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.118908 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.618898608 +0000 UTC m=+143.695731817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.220252 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.220575 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.720559054 +0000 UTC m=+143.797392083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.264174 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:21 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:21 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:21 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.264251 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.277494 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" event={"ID":"c4449adc-13fa-40ee-a058-f42120e5cbee","Type":"ContainerStarted","Data":"62bba82be51fa967b06d22900e1d09c6c76da1fafd581ece10feac43602701f8"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.281475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" event={"ID":"5765190c-206a-481f-a72e-4f119e8881bc","Type":"ContainerStarted","Data":"229695f8f6f067abf648c89e99c16820f01cd4732e9afb31b1504ec8504f054d"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.292558 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" event={"ID":"d7faf6fc-58fe-4457-bb7c-510fce0b60a7","Type":"ContainerStarted","Data":"0c3e4aae80e66678adf08af05318fab97caed13e1501234e440d7512ff465be7"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.295523 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" event={"ID":"2ec3e08f-1312-4857-b152-cde8e51aad05","Type":"ContainerStarted","Data":"2b13e8ed7aebb35bc46b7e659e0a6c8e5700791a96dc79f4223d07d90d3d26d1"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.304085 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" event={"ID":"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a","Type":"ContainerStarted","Data":"5faecf633ad835c06cc6b68b846c03acbac8bd192c5a83038bc668a703642a8e"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.305911 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-j7zvj" event={"ID":"8285f69a-516d-4bdd-9a14-72d966a0b208","Type":"ContainerStarted","Data":"ace45929ee18b8ed6bd996d412540d21898463ca5bd92667bf2681c7fb613a58"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.317416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" 
event={"ID":"179de16d-c6d0-4cda-8d1f-8c2396301175","Type":"ContainerStarted","Data":"f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.318347 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" event={"ID":"a605a533-8d8c-47bc-a04c-0739f97482e6","Type":"ContainerStarted","Data":"728175cf5b005f0e763265b29a94c02e67b5d7f4a0beb6060f3abf9a29d438d1"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.321637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" event={"ID":"9467c15f-f3fe-4594-b97d-0838d43877d1","Type":"ContainerStarted","Data":"4071e7dbb12d0b5b7421e91deda493cfa67e6e80b66300dbcf0a816e28b79ddb"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.322371 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.322762 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.822743637 +0000 UTC m=+143.899576666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.324088 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" event={"ID":"92715363-5170-4018-8a70-eb8274f5ffe0","Type":"ContainerStarted","Data":"bfc9408c823c1107405c930c1f3208a46d5e299091b2987803611159f6c61249"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.325850 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" event={"ID":"677296cf-109d-4fc1-b3db-c8312605a5fb","Type":"ContainerStarted","Data":"985054a8b636598e5e894f279de6efdc0a8048e601c8053a559d2a9f7195246d"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.328368 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" event={"ID":"c690c8a8-1bd9-45ff-ba62-93cb7f1ce890","Type":"ContainerStarted","Data":"2f2d30b00791a905e2415cfd98ff0a171ac99c0a6a1631094a55829507d65d4a"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.331010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" event={"ID":"a14f9ae8-3c9b-4618-8255-a55408525925","Type":"ContainerStarted","Data":"57b65b1371c647ab1701b01ad0b5115915570f20b169bf098d2e945d4ecac6ba"} Jan 21 
14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.334865 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" event={"ID":"43c52dc8-25a9-44d5-bea6-ecd091f55d54","Type":"ContainerStarted","Data":"6ce89a012e959d0d1c80ae25f0ae117c8a9158c156776e00f960bef70da9a0c8"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.336955 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rfwp8" event={"ID":"95e79e6b-37ae-4e8d-9f95-65e8a8ae49b0","Type":"ContainerStarted","Data":"29c46b2b049bb70511a253a187cfd3e870f8f998a98603437ac414333e5fbd0b"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.361140 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" event={"ID":"031f1783-31bd-4008-ace8-3ede7d0a86de","Type":"ContainerStarted","Data":"65f0fe8a5cf0f9f184876184c7ede6f5a3b3ad412adac5e57a7d116c9d516caa"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.372940 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" event={"ID":"78db9f9d-1963-42d2-9e52-da80ef710af8","Type":"ContainerStarted","Data":"39d128c42c3e2d36432198ccf10990b342a4951d880f9a278870d3aa4ef99268"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.380341 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" event={"ID":"29cc0582-bf2f-4e0b-a351-2d933fdbd52f","Type":"ContainerStarted","Data":"3ee89daab4206d378ab6b73f49d318c45caa0b4e6dfa32ea69e1145e23de646b"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.391788 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nshzl" podStartSLOduration=123.39177037 podStartE2EDuration="2m3.39177037s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:21.391614334 +0000 UTC m=+143.468447353" watchObservedRunningTime="2026-01-21 14:36:21.39177037 +0000 UTC m=+143.468603399" Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.423264 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.423586 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:21.923570164 +0000 UTC m=+144.000403193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.427206 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" event={"ID":"70656800-9429-43df-a1cb-7c8617d23b3f","Type":"ContainerStarted","Data":"d99c6757f9658ce32d4704b76f3d35e4415e44f33b1b27def593d2cbcd31f4c9"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.433384 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" event={"ID":"c7158f8a-be32-4700-857f-faf9157f99f5","Type":"ContainerStarted","Data":"367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9"} Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.525557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.526626 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.026612767 +0000 UTC m=+144.103445796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.627571 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.627750 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.127724814 +0000 UTC m=+144.204557843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.628225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.629313 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.129300237 +0000 UTC m=+144.206133266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.728608 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.728709 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.228693596 +0000 UTC m=+144.305526625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.729017 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.729309 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.229302567 +0000 UTC m=+144.306135596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.830720 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.831172 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.331153099 +0000 UTC m=+144.407986128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:21 crc kubenswrapper[4902]: I0121 14:36:21.932803 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:21 crc kubenswrapper[4902]: E0121 14:36:21.933255 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.433237099 +0000 UTC m=+144.510070128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.301903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.302376 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.802358314 +0000 UTC m=+144.879191343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.308307 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:22 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:22 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:22 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.308386 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.316702 4902 csr.go:261] certificate signing request csr-kt5h4 is approved, waiting to be issued Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.325509 4902 csr.go:257] certificate signing request csr-kt5h4 is issued Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.333762 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.404037 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.404538 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:22.904521537 +0000 UTC m=+144.981354566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.453172 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w2qlx" event={"ID":"53985f44-9907-48a1-8912-6163cecceba9","Type":"ContainerStarted","Data":"ae7d743409db94cdee09def2038b0ceedb33c18c4ff2f90364d5e897f5a316f8"} Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.454434 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" event={"ID":"eff3ea2d-0de6-4bad-81e6-f3cac0c4d48f","Type":"ContainerStarted","Data":"082f22371a847422c526a5df9b818eeea07fe3bedc91572314a7978b2df34897"} Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.456016 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" event={"ID":"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea","Type":"ContainerStarted","Data":"f33eb07e6973a4d0cc5ef537f409a1783f74414b529c8c9a6202d35848934821"} Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.457301 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" event={"ID":"64f0091d-255f-4e9a-a14c-33d240892e51","Type":"ContainerStarted","Data":"6c3057c438dc57c5555cbf5cdffec68b45df0f20e85d9c6333890ef6a0f707b9"} Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.458945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" event={"ID":"91a268d0-59c0-4e7f-8b78-260d14051e34","Type":"ContainerStarted","Data":"a763e53ce94e844368e3dfb1b991dc8758c148ee43f337fbd0c3c870ee0243f8"} Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.460598 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" event={"ID":"50ac8539-334d-4811-8b3e-7a2df9e4c931","Type":"ContainerStarted","Data":"1fb8341a8f9936e50258c5ba8c5eef0a3f9eb4af2891d0507a32127aa4f5c071"} Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.461876 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" event={"ID":"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8","Type":"ContainerStarted","Data":"145fae805492337d72ae5a2fdbfd9b5b0428fecd9ac8aa1ba38c30b9a9528893"} Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.463409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" event={"ID":"a76d0bc2-07ac-4e62-bc5e-3cd58636b3d8","Type":"ContainerStarted","Data":"c25180bfc2c3c27bece0d13daf7ce3e485b9e8029cf3503223c6d80beee0b69b"} Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.464686 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w8c9w" event={"ID":"5a539bc6-7d2e-4eb9-adf5-9dfa82ba307a","Type":"ContainerStarted","Data":"b1bbb1b8d7fdfe70def77ee61b530b99341e6aef5002f6b303b7881e334b9f6a"} Jan 21 14:36:22 crc 
kubenswrapper[4902]: I0121 14:36:22.466011 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" event={"ID":"a94b1199-eac7-4e88-ad39-44936959740c","Type":"ContainerStarted","Data":"c73818123ed498ca7d09532898accc3e425730f6c65c803a1b8d46ed926ea7e2"} Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.466793 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.468373 4902 patch_prober.go:28] interesting pod/downloads-7954f5f757-j7zvj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.468436 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j7zvj" podUID="8285f69a-516d-4bdd-9a14-72d966a0b208" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.484135 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gvxn5" podStartSLOduration=124.484114277 podStartE2EDuration="2m4.484114277s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:22.481485648 +0000 UTC m=+144.558318687" watchObservedRunningTime="2026-01-21 14:36:22.484114277 +0000 UTC m=+144.560947306" Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.500513 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-j7zvj" podStartSLOduration=124.50049474 podStartE2EDuration="2m4.50049474s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:22.49959405 +0000 UTC m=+144.576427089" watchObservedRunningTime="2026-01-21 14:36:22.50049474 +0000 UTC m=+144.577327769" Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.505116 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.505579 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.005537111 +0000 UTC m=+145.082370160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.554634 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podStartSLOduration=124.55461452 podStartE2EDuration="2m4.55461452s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:22.539985515 +0000 UTC m=+144.616818564" watchObservedRunningTime="2026-01-21 14:36:22.55461452 +0000 UTC m=+144.631447539" Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.607028 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.612183 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.112161824 +0000 UTC m=+145.188994843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.708675 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.708837 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.208814061 +0000 UTC m=+145.285647090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.708891 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.709245 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.209238585 +0000 UTC m=+145.286071614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.810341 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.810775 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.310759376 +0000 UTC m=+145.387592415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:22 crc kubenswrapper[4902]: I0121 14:36:22.911984 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:22 crc kubenswrapper[4902]: E0121 14:36:22.912403 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.412387081 +0000 UTC m=+145.489220110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.013003 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.013521 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.513479148 +0000 UTC m=+145.590312177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.114371 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.114702 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.614685988 +0000 UTC m=+145.691519017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.216584 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.216997 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.716977635 +0000 UTC m=+145.793810674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.261682 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:23 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:23 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:23 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.261765 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.317874 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.318243 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.818228216 +0000 UTC m=+145.895061245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.326920 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 14:31:22 +0000 UTC, rotation deadline is 2026-12-11 07:59:50.815113904 +0000 UTC Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.326958 4902 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7769h23m27.488157783s for next certificate rotation Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.418528 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.418668 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.91865001 +0000 UTC m=+145.995483029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.418826 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.419109 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:23.919101645 +0000 UTC m=+145.995934664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.471613 4902 patch_prober.go:28] interesting pod/downloads-7954f5f757-j7zvj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.471676 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j7zvj" podUID="8285f69a-516d-4bdd-9a14-72d966a0b208" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.485180 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-k2wkm" podStartSLOduration=125.485162648 podStartE2EDuration="2m5.485162648s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.483912576 +0000 UTC m=+145.560745605" watchObservedRunningTime="2026-01-21 14:36:23.485162648 +0000 UTC m=+145.561995677" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.485486 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rfwp8" podStartSLOduration=9.485482109 podStartE2EDuration="9.485482109s" podCreationTimestamp="2026-01-21 14:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:22.556221594 +0000 UTC m=+144.633054623" watchObservedRunningTime="2026-01-21 14:36:23.485482109 +0000 UTC m=+145.562315138" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.502861 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" podStartSLOduration=125.502844356 podStartE2EDuration="2m5.502844356s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.500947282 +0000 UTC m=+145.577780311" watchObservedRunningTime="2026-01-21 14:36:23.502844356 +0000 UTC m=+145.579677385" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.520529 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.520794 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.020764571 +0000 UTC m=+146.097597790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.521416 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.521707 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.021694823 +0000 UTC m=+146.098527852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.529206 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-w8c9w" podStartSLOduration=8.529190736 podStartE2EDuration="8.529190736s" podCreationTimestamp="2026-01-21 14:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.52901389 +0000 UTC m=+145.605846919" watchObservedRunningTime="2026-01-21 14:36:23.529190736 +0000 UTC m=+145.606023765" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.555113 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lrz7m" podStartSLOduration=125.555090322 podStartE2EDuration="2m5.555090322s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.552722792 +0000 UTC m=+145.629555821" watchObservedRunningTime="2026-01-21 14:36:23.555090322 +0000 UTC m=+145.631923351" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.630957 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" podStartSLOduration=125.630939305 podStartE2EDuration="2m5.630939305s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
14:36:23.629825637 +0000 UTC m=+145.706658666" watchObservedRunningTime="2026-01-21 14:36:23.630939305 +0000 UTC m=+145.707772334" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.632658 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lrgnw" podStartSLOduration=125.632651633 podStartE2EDuration="2m5.632651633s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.579451085 +0000 UTC m=+145.656284114" watchObservedRunningTime="2026-01-21 14:36:23.632651633 +0000 UTC m=+145.709484662" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.634769 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.635157 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.135125846 +0000 UTC m=+146.211958925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.661963 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-57jmg" podStartSLOduration=125.661940413 podStartE2EDuration="2m5.661940413s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.660700841 +0000 UTC m=+145.737533870" watchObservedRunningTime="2026-01-21 14:36:23.661940413 +0000 UTC m=+145.738773442" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.693934 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" podStartSLOduration=125.693916343 podStartE2EDuration="2m5.693916343s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:23.69262399 +0000 UTC m=+145.769457019" watchObservedRunningTime="2026-01-21 14:36:23.693916343 +0000 UTC m=+145.770749372" Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.743833 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.744258 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.244240224 +0000 UTC m=+146.321073263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.844762 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.845169 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.345152895 +0000 UTC m=+146.421985924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:23 crc kubenswrapper[4902]: I0121 14:36:23.945835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:23 crc kubenswrapper[4902]: E0121 14:36:23.946207 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.446194519 +0000 UTC m=+146.523027548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.046535 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.046723 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.546698016 +0000 UTC m=+146.623531045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.047162 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.047501 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.547487203 +0000 UTC m=+146.624320232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.150365 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.150537 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.650510125 +0000 UTC m=+146.727343154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.151000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.151354 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.651337803 +0000 UTC m=+146.728170982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.252774 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.253325 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.753304149 +0000 UTC m=+146.830137178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.268268 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:24 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:24 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:24 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.268332 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.356337 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.357176 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.857162349 +0000 UTC m=+146.933995378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.457503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.457715 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:24.957690276 +0000 UTC m=+147.034523305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.478553 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" event={"ID":"0b8bdd9c-edbc-4a1f-9c97-f74cfcc6d70a","Type":"ContainerStarted","Data":"71fe55aece327484af9828b1af1a5b805cb4b117d45e52bba336220878a998c5"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.480715 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" event={"ID":"64f0091d-255f-4e9a-a14c-33d240892e51","Type":"ContainerStarted","Data":"a24f95ad5708c5b01be06b7fa4b5c86df3998a9939540c8781947f01bcb24a49"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.482615 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" event={"ID":"904ff956-5fbf-4e43-aede-3fa612c9bb70","Type":"ContainerStarted","Data":"9fad1be2d5ad45addaed2d543aafd9069c0049478775fea68fe2772d9850328a"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.484055 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" event={"ID":"a14f9ae8-3c9b-4618-8255-a55408525925","Type":"ContainerStarted","Data":"98d62d8fce0bbd01890f37dea5c043635816283893340bb4dba1b50f8dda4eeb"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.485490 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" event={"ID":"43c52dc8-25a9-44d5-bea6-ecd091f55d54","Type":"ContainerStarted","Data":"a34c936cbe9d2177f65ea603131f883feef8a4885ee272f71234971dc9eeaf76"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.487211 4902 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" event={"ID":"677296cf-109d-4fc1-b3db-c8312605a5fb","Type":"ContainerStarted","Data":"5d3fa35637ce99b6e95f1b895ce8cc22e716885755a11ccb7d2fb82b3563b4fd"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.490425 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" event={"ID":"d7faf6fc-58fe-4457-bb7c-510fce0b60a7","Type":"ContainerStarted","Data":"17c0305eb637c5e08d8f644f2ec603806f0dddd00b1694e3fd9a5093f24851ce"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.492033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" event={"ID":"a605a533-8d8c-47bc-a04c-0739f97482e6","Type":"ContainerStarted","Data":"a056f469ebc3174bb2315735af44c9ceb9220a327ce4872a92786404bb378a1e"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.492077 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" event={"ID":"a605a533-8d8c-47bc-a04c-0739f97482e6","Type":"ContainerStarted","Data":"a51f237ad9632527738df94acc24ebe0204590d7e6cecb281dc33eb89f282193"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.493491 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" event={"ID":"4c2958e3-5395-4efd-8b8f-f3e70fd9fcea","Type":"ContainerStarted","Data":"f355a8f80bba580e989a488ee5ecdb142deb96878eb1a0ddc5eade072c5b2f16"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.495349 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" event={"ID":"70656800-9429-43df-a1cb-7c8617d23b3f","Type":"ContainerStarted","Data":"de8fcd8c3571217b412f9ba6c688fc875ba6c7c7eb18b7b87d8ab03820c43542"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.496460 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" event={"ID":"031f1783-31bd-4008-ace8-3ede7d0a86de","Type":"ContainerStarted","Data":"7c797e570cd4e852e50c7663ffad626c90cceaf98256bfe9a699a2f48573dacb"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.498179 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" event={"ID":"1eabb5ac-ae9e-4853-a2ec-2d821a4883f8","Type":"ContainerStarted","Data":"686aa13cebf66d5e500d72e6f0fb2d508bd7beec304b17aacbf7db351b9ecb8f"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.498845 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-79b2r" podStartSLOduration=126.498834557 podStartE2EDuration="2m6.498834557s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.49627482 +0000 UTC m=+146.573107849" watchObservedRunningTime="2026-01-21 14:36:24.498834557 +0000 UTC m=+146.575667586" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.503357 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w2qlx" 
event={"ID":"53985f44-9907-48a1-8912-6163cecceba9","Type":"ContainerStarted","Data":"8a9d0691cf453da8912a0274448564ba0da760180a0ac16dcfdf50e65e2c28d1"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.503521 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.506453 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" event={"ID":"c4449adc-13fa-40ee-a058-f42120e5cbee","Type":"ContainerStarted","Data":"e7cc9adf6c6ffd87a3ba5cf44aa78e6276647d776afe90236ad2abc49306adbd"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.509874 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" event={"ID":"64d60c19-a655-408a-99e4-becff3e27018","Type":"ContainerStarted","Data":"94d489714adff1e3f3a01c05f46e5c90ec8a2939a9512faab21abd82058a0517"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.512221 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" event={"ID":"92715363-5170-4018-8a70-eb8274f5ffe0","Type":"ContainerStarted","Data":"7287b89ee773336dcfa7454e73ba2ffd95cdfbfccb3263364aa4ebeb9d39bb36"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.512667 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.514639 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" event={"ID":"2ec3e08f-1312-4857-b152-cde8e51aad05","Type":"ContainerStarted","Data":"401f5ff346f059614945345b9f74b97f85bb191a26a285bb372eccef2578c0cb"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.514692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" event={"ID":"2ec3e08f-1312-4857-b152-cde8e51aad05","Type":"ContainerStarted","Data":"8fbaeb9559c6a66ad097756767cc6caca67b07fc7b64b7940e97a53cd7c2e934"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.514758 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.516089 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" event={"ID":"29cc0582-bf2f-4e0b-a351-2d933fdbd52f","Type":"ContainerStarted","Data":"ec856e1dac38a3ac7ad9c0f8b2e34674e85ce45ab6848c35bbe4e309c28a1e1b"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.516120 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" event={"ID":"29cc0582-bf2f-4e0b-a351-2d933fdbd52f","Type":"ContainerStarted","Data":"705e84c3b4392269291fa277a420f60eeb0b713869479afcd68ea9dce4fc8152"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.540439 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.542612 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" podStartSLOduration=126.542600476 podStartE2EDuration="2m6.542600476s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.540498145 +0000 UTC m=+146.617331174" watchObservedRunningTime="2026-01-21 14:36:24.542600476 +0000 UTC m=+146.619433505" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.542757 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"98b807a7449c4e4c765b48d7c0ffdf31c9596dab0362dc991807d0d7c743d98d"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.554199 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" event={"ID":"9467c15f-f3fe-4594-b97d-0838d43877d1","Type":"ContainerStarted","Data":"cb785d5c8758cd5e440b2003decbf278c26896fda13150ac48e1d13a3a94fe6d"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.561288 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.565077 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.065061695 +0000 UTC m=+147.141894724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.575375 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" event={"ID":"ef463925-8c6c-4217-9bba-e15e1283c4c8","Type":"ContainerStarted","Data":"5c75b66565cf29879a3061886d41628130979d54e2a480a895d697fc88a6b063"} Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.576199 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.579807 4902 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zzmhd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.579869 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" podUID="ef463925-8c6c-4217-9bba-e15e1283c4c8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.627929 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gdzsm" podStartSLOduration=126.627907249 podStartE2EDuration="2m6.627907249s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.580708954 +0000 UTC m=+146.657541983" watchObservedRunningTime="2026-01-21 14:36:24.627907249 +0000 UTC m=+146.704740288" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.666590 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.668410 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.168383657 +0000 UTC m=+147.245216686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.691604 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5q929" podStartSLOduration=126.691579561 podStartE2EDuration="2m6.691579561s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.634308095 +0000 UTC m=+146.711141124" watchObservedRunningTime="2026-01-21 14:36:24.691579561 +0000 UTC m=+146.768412600" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.763767 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" podStartSLOduration=126.76374693 podStartE2EDuration="2m6.76374693s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.76228166 +0000 UTC m=+146.839114679" watchObservedRunningTime="2026-01-21 14:36:24.76374693 +0000 UTC m=+146.840579959" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.768703 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lpsnj" podStartSLOduration=126.768688997 podStartE2EDuration="2m6.768688997s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.69363905 +0000 UTC m=+146.770472079" watchObservedRunningTime="2026-01-21 14:36:24.768688997 +0000 UTC m=+146.845522026" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.768948 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.769486 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.269465093 +0000 UTC m=+147.346298122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.838564 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8wfd7" podStartSLOduration=126.838542428 podStartE2EDuration="2m6.838542428s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.799278341 +0000 UTC m=+146.876111370" watchObservedRunningTime="2026-01-21 14:36:24.838542428 +0000 UTC m=+146.915375457" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.839967 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qldcg" podStartSLOduration=126.839959345 podStartE2EDuration="2m6.839959345s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.836870711 +0000 UTC m=+146.913703740" watchObservedRunningTime="2026-01-21 14:36:24.839959345 +0000 UTC m=+146.916792374" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.854057 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qm6gk" podStartSLOduration=126.854025181 podStartE2EDuration="2m6.854025181s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.851915189 +0000 UTC m=+146.928748218" watchObservedRunningTime="2026-01-21 14:36:24.854025181 +0000 UTC m=+146.930858210" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.870385 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.870800 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.370769577 +0000 UTC m=+147.447602596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.878314 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2n2xb" podStartSLOduration=126.878294561 podStartE2EDuration="2m6.878294561s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.877636019 +0000 UTC m=+146.954469048" watchObservedRunningTime="2026-01-21 14:36:24.878294561 +0000 UTC m=+146.955127590" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.912640 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-w2qlx" podStartSLOduration=9.912617701 podStartE2EDuration="9.912617701s" podCreationTimestamp="2026-01-21 14:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.908242313 +0000 UTC m=+146.985075342" watchObservedRunningTime="2026-01-21 14:36:24.912617701 +0000 UTC m=+146.989450730" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.936141 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-895km" podStartSLOduration=126.936114955 podStartE2EDuration="2m6.936114955s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.934289513 +0000 UTC m=+147.011122552" watchObservedRunningTime="2026-01-21 14:36:24.936114955 +0000 UTC m=+147.012947994" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.970309 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wchr8" podStartSLOduration=126.97028724 podStartE2EDuration="2m6.97028724s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.967985902 +0000 UTC m=+147.044818941" watchObservedRunningTime="2026-01-21 14:36:24.97028724 +0000 UTC m=+147.047120269" Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.971990 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:24 crc kubenswrapper[4902]: E0121 14:36:24.972435 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 14:36:25.472419792 +0000 UTC m=+147.549252831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:24 crc kubenswrapper[4902]: I0121 14:36:24.993982 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hf96t" podStartSLOduration=126.99396197 podStartE2EDuration="2m6.99396197s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:24.992915245 +0000 UTC m=+147.069748274" watchObservedRunningTime="2026-01-21 14:36:24.99396197 +0000 UTC m=+147.070794999" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.036317 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" podStartSLOduration=127.036300291 podStartE2EDuration="2m7.036300291s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.036007651 +0000 UTC m=+147.112840680" watchObservedRunningTime="2026-01-21 14:36:25.036300291 +0000 UTC m=+147.113133310" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.063225 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jt8f8" podStartSLOduration=127.06318231 podStartE2EDuration="2m7.06318231s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.06318248 +0000 UTC m=+147.140015509" watchObservedRunningTime="2026-01-21 14:36:25.06318231 +0000 UTC m=+147.140015339" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.073922 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.074124 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.574094008 +0000 UTC m=+147.650927047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.074348 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.074731 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.57472188 +0000 UTC m=+147.651554909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.175941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.176301 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.676250071 +0000 UTC m=+147.753083110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.176415 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.176828 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.67680901 +0000 UTC m=+147.753642039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.268495 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:25 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:25 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:25 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.268574 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.278227 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.278468 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.778437584 +0000 UTC m=+147.855270613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.278623 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.279016 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.779006474 +0000 UTC m=+147.855839683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.380429 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.380648 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.880621338 +0000 UTC m=+147.957454367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.380992 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.381032 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.381107 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.381152 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.381173 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.381594 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.88157352 +0000 UTC m=+147.958406549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.382135 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.391951 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.396656 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.396936 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.482568 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.482717 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.982691168 +0000 UTC m=+148.059524197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.482773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.483130 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:25.983121282 +0000 UTC m=+148.059954311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.584520 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.584673 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.084648693 +0000 UTC m=+148.161481722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.584769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.585049 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.085030446 +0000 UTC m=+148.161863475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.589743 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"9d49120bb641011a9a5c94f4d523c824799d05732ca91f56c59e8ca3e763dcc2"} Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.594459 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" event={"ID":"64d60c19-a655-408a-99e4-becff3e27018","Type":"ContainerStarted","Data":"55fbb3c9518f031cb1920eb10c5ab585fd897728f0339f8963423381ffea6d31"} Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.602600 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" event={"ID":"43c52dc8-25a9-44d5-bea6-ecd091f55d54","Type":"ContainerStarted","Data":"f8b5e4e851d84b916aeb4fad715c878d5713caf3d93b6c9eeb8fbd20ef30bf70"} Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.620307 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.623333 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.634383 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.674201 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" podStartSLOduration=127.67418464 podStartE2EDuration="2m7.67418464s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.101564537 +0000 UTC m=+147.178397566" watchObservedRunningTime="2026-01-21 14:36:25.67418464 +0000 UTC m=+147.751017669" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.674524 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" podStartSLOduration=127.674519151 podStartE2EDuration="2m7.674519151s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.672577755 +0000 UTC m=+147.749410784" watchObservedRunningTime="2026-01-21 14:36:25.674519151 +0000 UTC m=+147.751352180" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.687658 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.689490 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.189437115 +0000 UTC m=+148.266270144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.746631 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-q69sb" podStartSLOduration=127.746600547 podStartE2EDuration="2m7.746600547s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.745591643 +0000 UTC m=+147.822424682" watchObservedRunningTime="2026-01-21 14:36:25.746600547 +0000 UTC m=+147.823433576" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.795515 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.795932 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.295917214 +0000 UTC m=+148.372750243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.808113 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-b5657" podStartSLOduration=127.808087765 podStartE2EDuration="2m7.808087765s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:25.793227023 +0000 UTC m=+147.870060052" watchObservedRunningTime="2026-01-21 14:36:25.808087765 +0000 UTC m=+147.884920794" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.867474 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zzmhd" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.899811 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:25 crc kubenswrapper[4902]: E0121 14:36:25.900262 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.400240869 +0000 UTC m=+148.477073908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.940746 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:25 crc kubenswrapper[4902]: I0121 14:36:25.961094 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wpch6" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.001230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.001634 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.501619816 +0000 UTC m=+148.578452845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.053917 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xgf94"] Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.054836 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.061437 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.078153 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"] Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.103152 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.104162 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 14:36:26.604146591 +0000 UTC m=+148.680979620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.206293 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.206645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.206670 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.206686 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.207136 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.707109241 +0000 UTC m=+148.783942260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.246421 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fqq5l"] Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.247372 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.258228 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.271358 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"] Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.301247 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:26 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:26 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:26 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.301317 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309215 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309467 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309524 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309542 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.309593 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.309877 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.809849763 +0000 UTC m=+148.886682812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.310259 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.310549 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.392601 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") pod \"certified-operators-xgf94\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.399329 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.411701 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.411742 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.411766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.411793 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.416603 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.416876 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:26.916853039 +0000 UTC m=+148.993686068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.424327 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.467437 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"] Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.468361 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.508838 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") pod \"community-operators-fqq5l\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.512710 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.512949 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.512992 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.513026 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.513217 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 14:36:27.013191525 +0000 UTC m=+149.090024564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.521373 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"] Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.618322 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619373 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619405 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619779 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.619827 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.620123 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.120112359 +0000 UTC m=+149.196945388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.620163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.636448 4902 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.680595 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-77b9d"] Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.688829 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.691476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"befc9f7492ac06652b09ddd286943da31e3a5f5dbb26be0910f72aaabab97b0c"} Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.720513 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.720868 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.220852474 +0000 UTC m=+149.297685503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.749452 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77b9d"] Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.796641 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") pod \"certified-operators-fl2j4\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.821758 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.821882 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.821976 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.822055 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.823807 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.323791792 +0000 UTC m=+149.400624821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.869620 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.923556 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.923769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.923906 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.423883364 +0000 UTC m=+149.500716393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924109 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924164 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924256 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924556 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.924577 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:26 crc kubenswrapper[4902]: E0121 14:36:26.924706 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.424697472 +0000 UTC m=+149.501530501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.958598 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.959673 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.960138 4902 patch_prober.go:28] interesting pod/console-f9d7485db-9nw4v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.960185 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9nw4v" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 21 14:36:26 crc kubenswrapper[4902]: I0121 14:36:26.985941 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") pod \"community-operators-77b9d\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.025402 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:27 crc kubenswrapper[4902]: E0121 14:36:27.033873 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.533850781 +0000 UTC m=+149.610683810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.087456 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.144062 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:27 crc kubenswrapper[4902]: E0121 14:36:27.144416 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.644399587 +0000 UTC m=+149.721232616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9nccj" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.477892 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:27 crc kubenswrapper[4902]: E0121 14:36:27.478377 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:36:27.978356184 +0000 UTC m=+150.055189213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.480706 4902 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T14:36:26.636475152Z","Handler":null,"Name":""} Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.483740 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:27 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:27 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:27 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.483781 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.484158 4902 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.484188 4902 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.490546 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"] Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.501112 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"] Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.529145 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"] Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.579539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:27 crc kubenswrapper[4902]: W0121 14:36:27.580231 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c88f2d9_944f_408e_bfe3_41c8baac6175.slice/crio-3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b WatchSource:0}: Error finding container 3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b: Status 404 returned error can't find the container with id 
3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.615370 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.615406 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.732401 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"957b30e503d8781975d7142b54c3fab6a51781cda36a2ca1f026e1e15ff8a621"} Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.741183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"913ba68ed42badf1566d83b31341f33e2cf15048515ef8a81733207860372fdd"} Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.772401 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.772434 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.786617 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.788433 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.798346 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9nccj\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.805374 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" event={"ID":"2c1970f7-f131-4594-b396-d33bb9776e33","Type":"ContainerStarted","Data":"e526b015b8182fec25aa1aa89eb2c511cfba5d47504213dfab85c3a51e848ef1"} Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.810237 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.818441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a9171c0531db9becb11a908b3a4c754e11832d0e0365adfed9405abc5f51867c"} Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.821883 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.822665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerStarted","Data":"053e278127621d0aa574a001b3d7f98dd3d2a28ff0f85cb3abcc55c7682fa466"} Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.832360 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerStarted","Data":"f45d37f7ac621a924bdf6d205f6dcfb689dbb7f1904649cc3bbc2a2dac0231b6"} Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.839354 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-v4hs9" podStartSLOduration=12.839338694 podStartE2EDuration="12.839338694s" podCreationTimestamp="2026-01-21 14:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:27.83833554 +0000 UTC m=+149.915168569" watchObservedRunningTime="2026-01-21 14:36:27.839338694 +0000 UTC m=+149.916171723" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.851702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerStarted","Data":"3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b"} Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.888525 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.894514 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77b9d"] Jan 21 14:36:27 crc kubenswrapper[4902]: W0121 14:36:27.926202 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008311b3_7361_4466_aacd_01bbaa16f6df.slice/crio-e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca WatchSource:0}: Error finding container e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca: Status 404 returned error can't find the container with id e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.968633 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:27 crc kubenswrapper[4902]: I0121 14:36:27.976433 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.025901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.063872 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.079064 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tgt87" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.135082 4902 patch_prober.go:28] interesting pod/apiserver-76f77b778f-x9bhh container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 14:36:28 crc kubenswrapper[4902]: [+]log ok Jan 21 14:36:28 crc kubenswrapper[4902]: [+]etcd ok Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/max-in-flight-filter ok Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 14:36:28 crc kubenswrapper[4902]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 14:36:28 crc kubenswrapper[4902]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 14:36:28 crc kubenswrapper[4902]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 21 14:36:28 crc kubenswrapper[4902]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 14:36:28 crc kubenswrapper[4902]: livez check failed Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.135457 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" podUID="64d60c19-a655-408a-99e4-becff3e27018" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.206154 4902 patch_prober.go:28] interesting pod/downloads-7954f5f757-j7zvj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.206210 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j7zvj" podUID="8285f69a-516d-4bdd-9a14-72d966a0b208" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 
14:36:28.206497 4902 patch_prober.go:28] interesting pod/downloads-7954f5f757-j7zvj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.206551 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-j7zvj" podUID="8285f69a-516d-4bdd-9a14-72d966a0b208" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.254293 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.262843 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.266581 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.266668 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.270848 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:28 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:28 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:28 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.270892 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.286216 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.335907 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.346696 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.347524 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") pod 
\"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.384901 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.385762 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.385911 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.385983 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.449958 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.450307 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.450414 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.450889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.451276 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.503312 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") pod \"redhat-marketplace-d7hf5\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.624936 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.625277 4902 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.626652 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.645972 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.652987 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.653215 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.653312 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.754726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.754797 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.754833 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.755320 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.755568 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") pod \"redhat-marketplace-dl5zx\" (UID: 
\"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.776843 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") pod \"redhat-marketplace-dl5zx\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.866036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5ad6d502e9f3d07aeeec65a6e1e3c4988e877e7cb0d54afdaee4b4dbed4dd820"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.866124 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.868725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"50739c0e40ed1ca48d87578bfa5f765d33859970fe56bbf1abe3197ae90ef763"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.871175 4902 generic.go:334] "Generic (PLEG): container finished" podID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerID="4e686b959372288a5668349b284ecd38a38ea795d787fa0d477db1901cf9976c" exitCode=0 Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.871406 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerDied","Data":"4e686b959372288a5668349b284ecd38a38ea795d787fa0d477db1901cf9976c"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.872706 4902 generic.go:334] "Generic (PLEG): container finished" podID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerID="a1204ec2b5e76cfd0fb6167da34f831607a537ca3ed511cbf74c9c91b780c2f9" exitCode=0 Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.872777 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerDied","Data":"a1204ec2b5e76cfd0fb6167da34f831607a537ca3ed511cbf74c9c91b780c2f9"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.873954 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.888314 4902 generic.go:334] "Generic (PLEG): container finished" podID="008311b3-7361-4466-aacd-01bbaa16f6df" containerID="063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9" exitCode=0 Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.888702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerDied","Data":"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.888731 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" 
event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerStarted","Data":"e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.896524 4902 generic.go:334] "Generic (PLEG): container finished" podID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerID="d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6" exitCode=0 Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.896586 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerDied","Data":"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.899342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" event={"ID":"2e95c252-bd71-44fe-a8f1-d9a346d8a882","Type":"ContainerStarted","Data":"7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.899372 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" event={"ID":"2e95c252-bd71-44fe-a8f1-d9a346d8a882","Type":"ContainerStarted","Data":"72fa44f70a1a8a5c4b377700f7f908db843af15c5da8c33d09c4e26da32bbe19"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.899773 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.910034 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"09559999a949a133e59b5653d492ad8fb626c72ab6bfa2ced952444315ae1b5a"} Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.919551 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wgsqz" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.944017 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:36:28 crc kubenswrapper[4902]: I0121 14:36:28.972438 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" podStartSLOduration=130.972423208 podStartE2EDuration="2m10.972423208s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:28.970603577 +0000 UTC m=+151.047436606" watchObservedRunningTime="2026-01-21 14:36:28.972423208 +0000 UTC m=+151.049256237" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.191614 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:36:29 crc kubenswrapper[4902]: W0121 14:36:29.204639 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19482ae1_f291_4111_83b5_56fa37063508.slice/crio-024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd WatchSource:0}: Error finding container 024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd: Status 404 returned error can't find the container with id 024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.233269 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.237312 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.240996 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.265973 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.266073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.266099 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.267297 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:29 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 
14:36:29 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:29 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.267376 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.309117 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.365871 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.367533 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.367677 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.367848 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.367959 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.368066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.386833 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") pod \"redhat-operators-98c57\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.568970 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.624736 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.625937 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.640309 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.672196 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.672239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.672351 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.703689 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.707238 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.709430 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.709787 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.713255 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.773883 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.773956 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774190 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774210 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774488 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.774740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.823664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") pod \"redhat-operators-chl56\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.876257 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.876360 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.876474 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.918163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.924962 4902 generic.go:334] "Generic (PLEG): container finished" podID="19482ae1-f291-4111-83b5-56fa37063508" containerID="9b604d27ef105b652ea19c99e2ae291eacdb1348bd4b5e106e90424e329a7180" exitCode=0 Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.925068 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerDied","Data":"9b604d27ef105b652ea19c99e2ae291eacdb1348bd4b5e106e90424e329a7180"} Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.925098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerStarted","Data":"024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd"} Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.927820 4902 generic.go:334] "Generic (PLEG): container finished" podID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerID="3c2863c18937166425d91344f3ec1614a7f70129ffe061c9c5ee80eb31756b3f" exitCode=0 Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.928448 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerDied","Data":"3c2863c18937166425d91344f3ec1614a7f70129ffe061c9c5ee80eb31756b3f"} Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.928478 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" 
event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerStarted","Data":"6d22776fe71b564cc70ae18c09c444bfb7b9c6605b6f0f8a041e615143a16c69"} Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.962799 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:36:29 crc kubenswrapper[4902]: I0121 14:36:29.985305 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.025384 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.261343 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:30 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:30 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:30 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.261647 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.333549 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.421112 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 14:36:30 crc kubenswrapper[4902]: W0121 14:36:30.443587 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod50d5a74e_3e40_493a_bb17_3de7c5ff8b26.slice/crio-b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a WatchSource:0}: Error finding container b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a: Status 404 returned error can't find the container with id b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.951616 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerDied","Data":"0863c2ef512883dfa5c8cb15d84b8d3e8007faf5a420481b07e81570d0bbc513"} Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.951885 4902 generic.go:334] "Generic (PLEG): container finished" podID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerID="0863c2ef512883dfa5c8cb15d84b8d3e8007faf5a420481b07e81570d0bbc513" exitCode=0 Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.952021 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerStarted","Data":"5d200290d772299c202f1a65fa0061ebdcb1ccceea36fa735b536ebf39ba3497"} Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.957298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"50d5a74e-3e40-493a-bb17-3de7c5ff8b26","Type":"ContainerStarted","Data":"b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a"} Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.967475 4902 generic.go:334] "Generic (PLEG): container finished" podID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerID="b780515dde8ccd794f02bd3dc6005c6baf519de90ebd8e42d401146a27f9e971" exitCode=0 Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.968174 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerDied","Data":"b780515dde8ccd794f02bd3dc6005c6baf519de90ebd8e42d401146a27f9e971"} Jan 21 14:36:30 crc kubenswrapper[4902]: I0121 14:36:30.968238 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerStarted","Data":"c2b8854fe921d56cd0a1e4ec23fb7eafebd1972826e58b8204b172f529d4bbf4"} Jan 21 14:36:31 crc kubenswrapper[4902]: I0121 14:36:31.262620 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:31 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:31 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:31 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:31 crc kubenswrapper[4902]: I0121 14:36:31.262690 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.039229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50d5a74e-3e40-493a-bb17-3de7c5ff8b26","Type":"ContainerStarted","Data":"71543878641ee45cb3815d79c247489e1669d37bce0ec428e1d26077ad1a012f"} Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.062412 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.062396078 podStartE2EDuration="3.062396078s" podCreationTimestamp="2026-01-21 14:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:32.059285543 +0000 UTC m=+154.136118572" watchObservedRunningTime="2026-01-21 14:36:32.062396078 +0000 UTC m=+154.139229107" Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.262412 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:32 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:32 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:32 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.262491 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.772172 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:32 crc kubenswrapper[4902]: I0121 14:36:32.777360 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-x9bhh" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.049401 4902 generic.go:334] "Generic (PLEG): container finished" podID="70656800-9429-43df-a1cb-7c8617d23b3f" containerID="de8fcd8c3571217b412f9ba6c688fc875ba6c7c7eb18b7b87d8ab03820c43542" exitCode=0 Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.049471 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" event={"ID":"70656800-9429-43df-a1cb-7c8617d23b3f","Type":"ContainerDied","Data":"de8fcd8c3571217b412f9ba6c688fc875ba6c7c7eb18b7b87d8ab03820c43542"} Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.051768 4902 generic.go:334] "Generic (PLEG): container finished" podID="50d5a74e-3e40-493a-bb17-3de7c5ff8b26" containerID="71543878641ee45cb3815d79c247489e1669d37bce0ec428e1d26077ad1a012f" exitCode=0 Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.052734 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50d5a74e-3e40-493a-bb17-3de7c5ff8b26","Type":"ContainerDied","Data":"71543878641ee45cb3815d79c247489e1669d37bce0ec428e1d26077ad1a012f"} Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.265141 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:33 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:33 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:33 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.265217 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.387573 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-w2qlx" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.588410 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.589493 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.594175 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.595191 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.601986 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.670109 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.670209 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.771172 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.771250 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.771335 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.792951 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:33 crc kubenswrapper[4902]: I0121 14:36:33.970324 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.263565 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:34 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:34 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:34 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.263625 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.414910 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.476030 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.590787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") pod \"70656800-9429-43df-a1cb-7c8617d23b3f\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.590866 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") pod \"70656800-9429-43df-a1cb-7c8617d23b3f\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.590971 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") pod \"70656800-9429-43df-a1cb-7c8617d23b3f\" (UID: \"70656800-9429-43df-a1cb-7c8617d23b3f\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.590999 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") pod \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.591017 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") pod \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\" (UID: \"50d5a74e-3e40-493a-bb17-3de7c5ff8b26\") " Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.592222 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50d5a74e-3e40-493a-bb17-3de7c5ff8b26" (UID: "50d5a74e-3e40-493a-bb17-3de7c5ff8b26"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.592673 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume" (OuterVolumeSpecName: "config-volume") pod "70656800-9429-43df-a1cb-7c8617d23b3f" (UID: "70656800-9429-43df-a1cb-7c8617d23b3f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.600843 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.604515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70656800-9429-43df-a1cb-7c8617d23b3f" (UID: "70656800-9429-43df-a1cb-7c8617d23b3f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.610255 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "50d5a74e-3e40-493a-bb17-3de7c5ff8b26" (UID: "50d5a74e-3e40-493a-bb17-3de7c5ff8b26"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.607707 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t" (OuterVolumeSpecName: "kube-api-access-sfg9t") pod "70656800-9429-43df-a1cb-7c8617d23b3f" (UID: "70656800-9429-43df-a1cb-7c8617d23b3f"). InnerVolumeSpecName "kube-api-access-sfg9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693734 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfg9t\" (UniqueName: \"kubernetes.io/projected/70656800-9429-43df-a1cb-7c8617d23b3f-kube-api-access-sfg9t\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693783 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693793 4902 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50d5a74e-3e40-493a-bb17-3de7c5ff8b26-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693803 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70656800-9429-43df-a1cb-7c8617d23b3f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:34 crc kubenswrapper[4902]: I0121 14:36:34.693814 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70656800-9429-43df-a1cb-7c8617d23b3f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.126382 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d6a681c-0b89-4f72-9f57-64c0915af789","Type":"ContainerStarted","Data":"968f9e6a19298b7a86bae544ca30fb68936bb23e6bab950c272feb412b841333"} Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.154485 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"50d5a74e-3e40-493a-bb17-3de7c5ff8b26","Type":"ContainerDied","Data":"b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a"} Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.154542 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77d51c439caba1ec86969bf0f6ee5c490a80cbf97747b1dbe225a9d8179f65a" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.154650 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.176081 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" event={"ID":"70656800-9429-43df-a1cb-7c8617d23b3f","Type":"ContainerDied","Data":"d99c6757f9658ce32d4704b76f3d35e4415e44f33b1b27def593d2cbcd31f4c9"} Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.176173 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99c6757f9658ce32d4704b76f3d35e4415e44f33b1b27def593d2cbcd31f4c9" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.176266 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw" Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.264575 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:35 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:35 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:35 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:35 crc kubenswrapper[4902]: I0121 14:36:35.264646 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.209179 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d6a681c-0b89-4f72-9f57-64c0915af789","Type":"ContainerStarted","Data":"a824c02aab82ea190dd1e12ccf4ee2855e18f36e4ecd719a6e1b635979dd07b4"} Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.263487 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:36 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:36 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:36 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.263557 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.958680 4902 patch_prober.go:28] interesting pod/console-f9d7485db-9nw4v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 21 14:36:36 crc kubenswrapper[4902]: I0121 14:36:36.958776 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9nw4v" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 21 14:36:37 crc kubenswrapper[4902]: I0121 14:36:37.230624 4902 generic.go:334] "Generic (PLEG): container finished" podID="4d6a681c-0b89-4f72-9f57-64c0915af789" containerID="a824c02aab82ea190dd1e12ccf4ee2855e18f36e4ecd719a6e1b635979dd07b4" exitCode=0 Jan 21 14:36:37 crc kubenswrapper[4902]: I0121 14:36:37.230670 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d6a681c-0b89-4f72-9f57-64c0915af789","Type":"ContainerDied","Data":"a824c02aab82ea190dd1e12ccf4ee2855e18f36e4ecd719a6e1b635979dd07b4"} Jan 21 14:36:37 crc kubenswrapper[4902]: I0121 14:36:37.261851 4902 patch_prober.go:28] interesting pod/router-default-5444994796-2lccn container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:36:37 crc kubenswrapper[4902]: [-]has-synced failed: reason withheld Jan 21 14:36:37 crc kubenswrapper[4902]: [+]process-running ok Jan 21 14:36:37 crc kubenswrapper[4902]: healthz check failed Jan 21 14:36:37 crc kubenswrapper[4902]: I0121 14:36:37.261937 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2lccn" podUID="52fccef5-5bbc-4411-9eb0-fcca74e3c3f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:36:38 crc kubenswrapper[4902]: I0121 14:36:38.212630 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-j7zvj" Jan 21 14:36:38 crc kubenswrapper[4902]: I0121 14:36:38.304968 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:38 crc kubenswrapper[4902]: I0121 14:36:38.308289 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2lccn" Jan 21 14:36:40 crc kubenswrapper[4902]: I0121 14:36:40.540604 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:40 crc kubenswrapper[4902]: I0121 14:36:40.546698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05d94e6a-249a-484c-8895-085e81f1dfaa-metrics-certs\") pod \"network-metrics-daemon-kq588\" (UID: \"05d94e6a-249a-484c-8895-085e81f1dfaa\") " pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:40 crc kubenswrapper[4902]: I0121 14:36:40.637607 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq588" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.030853 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.082382 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") pod \"4d6a681c-0b89-4f72-9f57-64c0915af789\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.082520 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") pod \"4d6a681c-0b89-4f72-9f57-64c0915af789\" (UID: \"4d6a681c-0b89-4f72-9f57-64c0915af789\") " Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.083232 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d6a681c-0b89-4f72-9f57-64c0915af789" (UID: "4d6a681c-0b89-4f72-9f57-64c0915af789"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.088138 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d6a681c-0b89-4f72-9f57-64c0915af789" (UID: "4d6a681c-0b89-4f72-9f57-64c0915af789"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.184133 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d6a681c-0b89-4f72-9f57-64c0915af789-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.184179 4902 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d6a681c-0b89-4f72-9f57-64c0915af789-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.288275 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d6a681c-0b89-4f72-9f57-64c0915af789","Type":"ContainerDied","Data":"968f9e6a19298b7a86bae544ca30fb68936bb23e6bab950c272feb412b841333"} Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.288325 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968f9e6a19298b7a86bae544ca30fb68936bb23e6bab950c272feb412b841333" Jan 21 14:36:43 crc kubenswrapper[4902]: I0121 14:36:43.288322 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:36:45 crc kubenswrapper[4902]: I0121 14:36:45.159710 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq588"] Jan 21 14:36:45 crc kubenswrapper[4902]: I0121 14:36:45.304623 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq588" event={"ID":"05d94e6a-249a-484c-8895-085e81f1dfaa","Type":"ContainerStarted","Data":"c6e48ac868a0c03714792fe2441343448e65f2b639ab804ec1c8fcf4b54f624f"} Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.312161 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq588" event={"ID":"05d94e6a-249a-484c-8895-085e81f1dfaa","Type":"ContainerStarted","Data":"5ec995d56589a9eddf5c407fa139e3266611a239641b68415251855851035bca"} Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.724780 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.725337 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" containerID="cri-o://367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9" gracePeriod=30 Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.741052 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.741300 4902 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" containerID="cri-o://6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144" gracePeriod=30 Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.976596 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:46 crc kubenswrapper[4902]: I0121 14:36:46.980439 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.320167 4902 generic.go:334] "Generic (PLEG): container finished" podID="c7158f8a-be32-4700-857f-faf9157f99f5" containerID="367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9" exitCode=0 Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.320210 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" event={"ID":"c7158f8a-be32-4700-857f-faf9157f99f5","Type":"ContainerDied","Data":"367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9"} Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.321752 4902 generic.go:334] "Generic (PLEG): container finished" podID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerID="6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144" exitCode=0 Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.322356 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" event={"ID":"01ee90aa-9465-4cd2-97a0-ce735d557649","Type":"ContainerDied","Data":"6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144"} Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.733223 4902 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xrcxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.733284 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.769658 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.769721 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:36:47 crc kubenswrapper[4902]: I0121 14:36:47.827574 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:36:48 crc kubenswrapper[4902]: I0121 14:36:48.283602 4902 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tn2zp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 21 14:36:48 crc kubenswrapper[4902]: I0121 14:36:48.283689 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 21 14:36:57 crc kubenswrapper[4902]: I0121 14:36:57.660317 4902 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:36:57 crc kubenswrapper[4902]: I0121 14:36:57.661022 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:36:57 crc kubenswrapper[4902]: I0121 14:36:57.733637 4902 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xrcxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 14:36:57 crc kubenswrapper[4902]: I0121 14:36:57.733700 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 14:36:58 crc kubenswrapper[4902]: I0121 14:36:58.307163 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-67gqb" Jan 21 14:36:59 crc kubenswrapper[4902]: I0121 14:36:59.283447 4902 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tn2zp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:36:59 crc kubenswrapper[4902]: I0121 14:36:59.283557 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:37:06 crc 
kubenswrapper[4902]: I0121 14:37:06.079207 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:37:07 crc kubenswrapper[4902]: I0121 14:37:07.733110 4902 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xrcxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 14:37:07 crc kubenswrapper[4902]: I0121 14:37:07.733468 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 14:37:08 crc kubenswrapper[4902]: E0121 14:37:08.722772 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 14:37:08 crc kubenswrapper[4902]: E0121 14:37:08.722981 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jq29v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-chl56_openshift-marketplace(64be302e-c39a-4e45-8b5d-07b8819a6eb0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:08 crc kubenswrapper[4902]: E0121 14:37:08.724190 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-chl56" 
podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" Jan 21 14:37:09 crc kubenswrapper[4902]: I0121 14:37:09.282985 4902 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tn2zp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:37:09 crc kubenswrapper[4902]: I0121 14:37:09.283098 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:37:09 crc kubenswrapper[4902]: E0121 14:37:09.646941 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-chl56" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.393959 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 14:37:10 crc kubenswrapper[4902]: E0121 14:37:10.394319 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70656800-9429-43df-a1cb-7c8617d23b3f" containerName="collect-profiles" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394332 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="70656800-9429-43df-a1cb-7c8617d23b3f" containerName="collect-profiles" Jan 21 14:37:10 crc kubenswrapper[4902]: E0121 14:37:10.394342 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6a681c-0b89-4f72-9f57-64c0915af789" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394348 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6a681c-0b89-4f72-9f57-64c0915af789" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: E0121 14:37:10.394378 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d5a74e-3e40-493a-bb17-3de7c5ff8b26" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394385 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d5a74e-3e40-493a-bb17-3de7c5ff8b26" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394511 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d5a74e-3e40-493a-bb17-3de7c5ff8b26" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394546 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d6a681c-0b89-4f72-9f57-64c0915af789" containerName="pruner" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.394555 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="70656800-9429-43df-a1cb-7c8617d23b3f" containerName="collect-profiles" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.395062 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.398894 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.399174 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.400704 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.480888 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.481432 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.583232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.583403 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.583524 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.615164 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:10 crc kubenswrapper[4902]: I0121 14:37:10.746248 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.509623 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.509806 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xgf94_openshift-marketplace(cc91d441-7f4a-45f8-8f71-1f04e4ade80c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.511590 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xgf94" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.995622 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.995835 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ntv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fqq5l_openshift-marketplace(bc7ccff8-2db2-4663-9565-42f2357e4bda): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:12 crc kubenswrapper[4902]: E0121 14:37:12.997029 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fqq5l" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.786300 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.787300 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.799981 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.849438 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.849503 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.849532 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951683 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951730 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951743 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.951817 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:14 crc kubenswrapper[4902]: I0121 14:37:14.972505 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"84af95e1-2275-49b2-987c-afa33fb32734\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.104625 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.443766 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fqq5l" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.443925 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xgf94" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.547920 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.548289 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhvxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dl5zx_openshift-marketplace(4504c44c-17da-4a32-ac81-7efc9ec6b1cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.550090 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dl5zx" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.571510 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.612827 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.613644 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.613656 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.613746 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" containerName="controller-manager" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.614103 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.623463 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.623592 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wc5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-d7hf5_openshift-marketplace(19482ae1-f291-4111-83b5-56fa37063508): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.624860 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-d7hf5" podUID="19482ae1-f291-4111-83b5-56fa37063508" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.625498 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.629653 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.629762 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2f8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-98c57_openshift-marketplace(ea3b3336-0258-4b66-bd33-dd4e01543236): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.630978 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-98c57" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.646751 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 14:37:15 crc 
kubenswrapper[4902]: E0121 14:37:15.646905 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sp9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-77b9d_openshift-marketplace(008311b3-7361-4466-aacd-01bbaa16f6df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.648640 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-77b9d" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.661446 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.661541 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.661642 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.661664 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q56gh\" (UniqueName: 
\"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.662994 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") pod \"c7158f8a-be32-4700-857f-faf9157f99f5\" (UID: \"c7158f8a-be32-4700-857f-faf9157f99f5\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663539 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663543 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663628 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663729 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.663926 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.668968 4902 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.670730 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.670908 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nlk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fl2j4_openshift-marketplace(3c88f2d9-944f-408e-bfe3-41c8baac6175): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:37:15 crc kubenswrapper[4902]: E0121 14:37:15.672326 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fl2j4" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.677843 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh" (OuterVolumeSpecName: "kube-api-access-q56gh") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "kube-api-access-q56gh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.678412 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config" (OuterVolumeSpecName: "config") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.680296 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7158f8a-be32-4700-857f-faf9157f99f5" (UID: "c7158f8a-be32-4700-857f-faf9157f99f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765468 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765496 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765518 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765585 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765640 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765654 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q56gh\" (UniqueName: \"kubernetes.io/projected/c7158f8a-be32-4700-857f-faf9157f99f5-kube-api-access-q56gh\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 
crc kubenswrapper[4902]: I0121 14:37:15.765666 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7158f8a-be32-4700-857f-faf9157f99f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.765677 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7158f8a-be32-4700-857f-faf9157f99f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.767749 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.768150 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.769586 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.776691 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.779806 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.784911 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") pod \"controller-manager-758f874869-2jb7w\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.866478 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") pod \"01ee90aa-9465-4cd2-97a0-ce735d557649\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.866557 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") pod \"01ee90aa-9465-4cd2-97a0-ce735d557649\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.866601 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") pod \"01ee90aa-9465-4cd2-97a0-ce735d557649\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.866656 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") pod \"01ee90aa-9465-4cd2-97a0-ce735d557649\" (UID: \"01ee90aa-9465-4cd2-97a0-ce735d557649\") " Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.867997 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca" (OuterVolumeSpecName: "client-ca") pod "01ee90aa-9465-4cd2-97a0-ce735d557649" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.868064 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config" (OuterVolumeSpecName: "config") pod "01ee90aa-9465-4cd2-97a0-ce735d557649" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.870225 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr" (OuterVolumeSpecName: "kube-api-access-gbhnr") pod "01ee90aa-9465-4cd2-97a0-ce735d557649" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649"). InnerVolumeSpecName "kube-api-access-gbhnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.870777 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ee90aa-9465-4cd2-97a0-ce735d557649" (UID: "01ee90aa-9465-4cd2-97a0-ce735d557649"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.955922 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.968250 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbhnr\" (UniqueName: \"kubernetes.io/projected/01ee90aa-9465-4cd2-97a0-ce735d557649-kube-api-access-gbhnr\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.968308 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ee90aa-9465-4cd2-97a0-ce735d557649-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.968326 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.968343 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ee90aa-9465-4cd2-97a0-ce735d557649-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:15 crc kubenswrapper[4902]: I0121 14:37:15.995966 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.000410 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.183616 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:16 crc kubenswrapper[4902]: W0121 14:37:16.195082 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7db6f852_8480_412f_a9bf_9afd18c41d83.slice/crio-1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3 WatchSource:0}: Error finding container 1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3: Status 404 returned error can't find the container with id 1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3 Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.494440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq588" event={"ID":"05d94e6a-249a-484c-8895-085e81f1dfaa","Type":"ContainerStarted","Data":"89f9eec850349a8d70abaf29a6e16ed37c20bae82e8785de31be9800941385f7"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.498468 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.498513 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf" event={"ID":"01ee90aa-9465-4cd2-97a0-ce735d557649","Type":"ContainerDied","Data":"1d6f20bc21db99ffc3b51f783b09029cf7dec2c4ed9b3a8a2f63bf561b414a3a"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.498578 4902 scope.go:117] "RemoveContainer" containerID="6352bb96995cea97dbd91f19d4ac33bcf83056c8d4e8ed01ff2fda9bf228a144" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.504937 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" event={"ID":"7db6f852-8480-412f-a9bf-9afd18c41d83","Type":"ContainerStarted","Data":"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.504974 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" event={"ID":"7db6f852-8480-412f-a9bf-9afd18c41d83","Type":"ContainerStarted","Data":"1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.505836 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.508248 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c450a-56ce-4439-8396-f6d87fee149c","Type":"ContainerStarted","Data":"417166d5735591f122acc8577f8a8ed9b2f4076fce7e42a14e97f0765b04b1d6"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.508309 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c450a-56ce-4439-8396-f6d87fee149c","Type":"ContainerStarted","Data":"3eed38a6c389fd9ea1b4b51ae79af02a7862cc31d751100e19c8ae0ae07b17e7"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.515098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84af95e1-2275-49b2-987c-afa33fb32734","Type":"ContainerStarted","Data":"ce71892c9cc4a5eca454b5acdd2876bc8fdf1542a231264709d1d8546488cc23"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.515155 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84af95e1-2275-49b2-987c-afa33fb32734","Type":"ContainerStarted","Data":"1e0e3b99d3e199bf0a5109aed2aaf9e421c0eab2e9ccba48ebac5e8687fa5207"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.518064 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.518536 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tn2zp" event={"ID":"c7158f8a-be32-4700-857f-faf9157f99f5","Type":"ContainerDied","Data":"31b5818a193a42b1200764cd8a3a2ec82450c46b99cee82fd307ec9a84582b72"} Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.518576 4902 scope.go:117] "RemoveContainer" containerID="367d869d9b3c4b737b065ed87b6bd46066ee2a10f6733ab3b357221abf8fd7a9" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.520698 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-77b9d" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.520694 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dl5zx" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.520861 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fl2j4" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.526363 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-d7hf5" podUID="19482ae1-f291-4111-83b5-56fa37063508" Jan 21 14:37:16 crc kubenswrapper[4902]: E0121 14:37:16.526425 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-98c57" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.526472 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.548689 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kq588" podStartSLOduration=178.54866189 podStartE2EDuration="2m58.54866189s" podCreationTimestamp="2026-01-21 14:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:16.538769841 +0000 UTC m=+198.615602870" watchObservedRunningTime="2026-01-21 14:37:16.54866189 +0000 UTC m=+198.625494919" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.660326 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 
14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.662672 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tn2zp"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.695059 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.701405 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xrcxf"] Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.715085 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=6.715067197 podStartE2EDuration="6.715067197s" podCreationTimestamp="2026-01-21 14:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:16.713726651 +0000 UTC m=+198.790559680" watchObservedRunningTime="2026-01-21 14:37:16.715067197 +0000 UTC m=+198.791900226" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.775803 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" podStartSLOduration=10.775787846 podStartE2EDuration="10.775787846s" podCreationTimestamp="2026-01-21 14:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:16.773170877 +0000 UTC m=+198.850003916" watchObservedRunningTime="2026-01-21 14:37:16.775787846 +0000 UTC m=+198.852620865" Jan 21 14:37:16 crc kubenswrapper[4902]: I0121 14:37:16.776256 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.776248662 podStartE2EDuration="2.776248662s" podCreationTimestamp="2026-01-21 14:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:16.753247544 +0000 UTC m=+198.830080573" watchObservedRunningTime="2026-01-21 14:37:16.776248662 +0000 UTC m=+198.853081691" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.525811 4902 generic.go:334] "Generic (PLEG): container finished" podID="d86c450a-56ce-4439-8396-f6d87fee149c" containerID="417166d5735591f122acc8577f8a8ed9b2f4076fce7e42a14e97f0765b04b1d6" exitCode=0 Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.526027 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c450a-56ce-4439-8396-f6d87fee149c","Type":"ContainerDied","Data":"417166d5735591f122acc8577f8a8ed9b2f4076fce7e42a14e97f0765b04b1d6"} Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.745176 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:17 crc kubenswrapper[4902]: E0121 14:37:17.745511 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.745523 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" Jan 21 14:37:17 crc 
kubenswrapper[4902]: I0121 14:37:17.745640 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" containerName="route-controller-manager" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.746017 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.749411 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.750246 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.750335 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.750483 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.750652 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.751245 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.757824 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.773134 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.773200 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.798515 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.798600 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.798666 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.798710 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.899472 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.899857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.899907 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.899934 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.901031 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.901073 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.905755 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:17 crc kubenswrapper[4902]: I0121 14:37:17.915427 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") pod \"route-controller-manager-677957ff68-7pb2f\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.062598 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.306129 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ee90aa-9465-4cd2-97a0-ce735d557649" path="/var/lib/kubelet/pods/01ee90aa-9465-4cd2-97a0-ce735d557649/volumes" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.307107 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7158f8a-be32-4700-857f-faf9157f99f5" path="/var/lib/kubelet/pods/c7158f8a-be32-4700-857f-faf9157f99f5/volumes" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.474921 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:18 crc kubenswrapper[4902]: W0121 14:37:18.483598 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64242610_9b91_49bc_9400_12298973aad0.slice/crio-471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330 WatchSource:0}: Error finding container 471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330: Status 404 returned error can't find the container with id 471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330 Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.541815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" event={"ID":"64242610-9b91-49bc-9400-12298973aad0","Type":"ContainerStarted","Data":"471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330"} Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.813517 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.915196 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") pod \"d86c450a-56ce-4439-8396-f6d87fee149c\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.915335 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") pod \"d86c450a-56ce-4439-8396-f6d87fee149c\" (UID: \"d86c450a-56ce-4439-8396-f6d87fee149c\") " Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.915444 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d86c450a-56ce-4439-8396-f6d87fee149c" (UID: "d86c450a-56ce-4439-8396-f6d87fee149c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.915633 4902 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d86c450a-56ce-4439-8396-f6d87fee149c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:18 crc kubenswrapper[4902]: I0121 14:37:18.922134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d86c450a-56ce-4439-8396-f6d87fee149c" (UID: "d86c450a-56ce-4439-8396-f6d87fee149c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.017246 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d86c450a-56ce-4439-8396-f6d87fee149c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.550700 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" event={"ID":"64242610-9b91-49bc-9400-12298973aad0","Type":"ContainerStarted","Data":"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152"} Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.552748 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.555183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d86c450a-56ce-4439-8396-f6d87fee149c","Type":"ContainerDied","Data":"3eed38a6c389fd9ea1b4b51ae79af02a7862cc31d751100e19c8ae0ae07b17e7"} Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.555212 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eed38a6c389fd9ea1b4b51ae79af02a7862cc31d751100e19c8ae0ae07b17e7" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.555276 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.561202 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:19 crc kubenswrapper[4902]: I0121 14:37:19.575739 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" podStartSLOduration=13.57571361 podStartE2EDuration="13.57571361s" podCreationTimestamp="2026-01-21 14:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:19.572886114 +0000 UTC m=+201.649719143" watchObservedRunningTime="2026-01-21 14:37:19.57571361 +0000 UTC m=+201.652546649" Jan 21 14:37:24 crc kubenswrapper[4902]: I0121 14:37:24.588971 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerStarted","Data":"b0e5bd21eb45121c58536e203537cf733407f5259b12126eae2bd654f50021a5"} Jan 21 14:37:25 crc kubenswrapper[4902]: I0121 14:37:25.597520 4902 generic.go:334] "Generic (PLEG): container finished" podID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerID="b0e5bd21eb45121c58536e203537cf733407f5259b12126eae2bd654f50021a5" exitCode=0 Jan 21 14:37:25 crc kubenswrapper[4902]: I0121 14:37:25.597612 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerDied","Data":"b0e5bd21eb45121c58536e203537cf733407f5259b12126eae2bd654f50021a5"} Jan 21 14:37:27 crc kubenswrapper[4902]: I0121 14:37:27.608481 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerStarted","Data":"d24c33bc95b69d74e25ab4cc6e01c313a3384aa99b80fde04e7056ebb32a4780"} Jan 21 14:37:27 crc kubenswrapper[4902]: I0121 14:37:27.612383 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerStarted","Data":"f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31"} Jan 21 14:37:27 crc kubenswrapper[4902]: I0121 14:37:27.656905 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chl56" podStartSLOduration=2.543483582 podStartE2EDuration="58.656889857s" podCreationTimestamp="2026-01-21 14:36:29 +0000 UTC" firstStartedPulling="2026-01-21 14:36:30.976454967 +0000 UTC m=+153.053288006" lastFinishedPulling="2026-01-21 14:37:27.089861252 +0000 UTC m=+209.166694281" observedRunningTime="2026-01-21 14:37:27.654543116 +0000 UTC m=+209.731376145" watchObservedRunningTime="2026-01-21 14:37:27.656889857 +0000 UTC m=+209.733722886" Jan 21 14:37:28 crc kubenswrapper[4902]: I0121 14:37:28.617371 4902 generic.go:334] "Generic (PLEG): container finished" podID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerID="32894c3de1e75b38e7274c12ebe204f101b1ece066ced856e2483329caa616b0" exitCode=0 Jan 21 14:37:28 crc kubenswrapper[4902]: I0121 14:37:28.617452 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" 
event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerDied","Data":"32894c3de1e75b38e7274c12ebe204f101b1ece066ced856e2483329caa616b0"} Jan 21 14:37:28 crc kubenswrapper[4902]: I0121 14:37:28.619817 4902 generic.go:334] "Generic (PLEG): container finished" podID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerID="d24c33bc95b69d74e25ab4cc6e01c313a3384aa99b80fde04e7056ebb32a4780" exitCode=0 Jan 21 14:37:28 crc kubenswrapper[4902]: I0121 14:37:28.619849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerDied","Data":"d24c33bc95b69d74e25ab4cc6e01c313a3384aa99b80fde04e7056ebb32a4780"} Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.628414 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerStarted","Data":"dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb"} Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.631453 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerStarted","Data":"02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6"} Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.652015 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xgf94" podStartSLOduration=3.4561969550000002 podStartE2EDuration="1m3.651998245s" podCreationTimestamp="2026-01-21 14:36:26 +0000 UTC" firstStartedPulling="2026-01-21 14:36:28.87365728 +0000 UTC m=+150.950490309" lastFinishedPulling="2026-01-21 14:37:29.06945857 +0000 UTC m=+211.146291599" observedRunningTime="2026-01-21 14:37:29.649626424 +0000 UTC m=+211.726459463" watchObservedRunningTime="2026-01-21 14:37:29.651998245 +0000 UTC m=+211.728831274" Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.675844 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dl5zx" podStartSLOduration=2.606489773 podStartE2EDuration="1m1.675827941s" podCreationTimestamp="2026-01-21 14:36:28 +0000 UTC" firstStartedPulling="2026-01-21 14:36:29.929754593 +0000 UTC m=+152.006587622" lastFinishedPulling="2026-01-21 14:37:28.999092761 +0000 UTC m=+211.075925790" observedRunningTime="2026-01-21 14:37:29.67316543 +0000 UTC m=+211.749998459" watchObservedRunningTime="2026-01-21 14:37:29.675827941 +0000 UTC m=+211.752660970" Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.986697 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:29 crc kubenswrapper[4902]: I0121 14:37:29.986753 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:30 crc kubenswrapper[4902]: I0121 14:37:30.637529 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerStarted","Data":"b890da03655fa68fe9c4fc2736b49d628cd2dfdd1eb8c53e0f6a92826e80b3e8"} Jan 21 14:37:30 crc kubenswrapper[4902]: I0121 14:37:30.642796 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" 
event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerStarted","Data":"f826ca325a2cf505b8c815980387763af7f3ba9503a5207a8722972de88aec84"} Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.276247 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chl56" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server" probeResult="failure" output=< Jan 21 14:37:31 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 14:37:31 crc kubenswrapper[4902]: > Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.651660 4902 generic.go:334] "Generic (PLEG): container finished" podID="008311b3-7361-4466-aacd-01bbaa16f6df" containerID="226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc" exitCode=0 Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.651745 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerDied","Data":"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc"} Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.656338 4902 generic.go:334] "Generic (PLEG): container finished" podID="19482ae1-f291-4111-83b5-56fa37063508" containerID="f826ca325a2cf505b8c815980387763af7f3ba9503a5207a8722972de88aec84" exitCode=0 Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.656391 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerDied","Data":"f826ca325a2cf505b8c815980387763af7f3ba9503a5207a8722972de88aec84"} Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.661654 4902 generic.go:334] "Generic (PLEG): container finished" podID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerID="b890da03655fa68fe9c4fc2736b49d628cd2dfdd1eb8c53e0f6a92826e80b3e8" exitCode=0 Jan 21 14:37:31 crc kubenswrapper[4902]: I0121 14:37:31.661785 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerDied","Data":"b890da03655fa68fe9c4fc2736b49d628cd2dfdd1eb8c53e0f6a92826e80b3e8"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.667522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerStarted","Data":"62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.669623 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerStarted","Data":"8cab10508e95ad152396df1527b6a482788a8695c58e273be61d4a6811398d99"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.671801 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerStarted","Data":"d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.686576 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d7hf5" podStartSLOduration=2.579450808 podStartE2EDuration="1m4.686550553s" 
podCreationTimestamp="2026-01-21 14:36:28 +0000 UTC" firstStartedPulling="2026-01-21 14:36:29.927971892 +0000 UTC m=+152.004804921" lastFinishedPulling="2026-01-21 14:37:32.035071637 +0000 UTC m=+214.111904666" observedRunningTime="2026-01-21 14:37:32.68149294 +0000 UTC m=+214.758325969" watchObservedRunningTime="2026-01-21 14:37:32.686550553 +0000 UTC m=+214.763383582" Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.695871 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerStarted","Data":"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed"} Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.721298 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fqq5l" podStartSLOduration=3.5085660279999997 podStartE2EDuration="1m6.721279862s" podCreationTimestamp="2026-01-21 14:36:26 +0000 UTC" firstStartedPulling="2026-01-21 14:36:28.874085705 +0000 UTC m=+150.950918724" lastFinishedPulling="2026-01-21 14:37:32.086799499 +0000 UTC m=+214.163632558" observedRunningTime="2026-01-21 14:37:32.718861409 +0000 UTC m=+214.795694438" watchObservedRunningTime="2026-01-21 14:37:32.721279862 +0000 UTC m=+214.798112891" Jan 21 14:37:32 crc kubenswrapper[4902]: I0121 14:37:32.746362 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-77b9d" podStartSLOduration=3.361184334 podStartE2EDuration="1m6.74633047s" podCreationTimestamp="2026-01-21 14:36:26 +0000 UTC" firstStartedPulling="2026-01-21 14:36:28.892057202 +0000 UTC m=+150.968890231" lastFinishedPulling="2026-01-21 14:37:32.277203338 +0000 UTC m=+214.354036367" observedRunningTime="2026-01-21 14:37:32.742445957 +0000 UTC m=+214.819278996" watchObservedRunningTime="2026-01-21 14:37:32.74633047 +0000 UTC m=+214.823163499" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.702427 4902 generic.go:334] "Generic (PLEG): container finished" podID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerID="8cab10508e95ad152396df1527b6a482788a8695c58e273be61d4a6811398d99" exitCode=0 Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.702497 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerDied","Data":"8cab10508e95ad152396df1527b6a482788a8695c58e273be61d4a6811398d99"} Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.851008 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gzb8l"] Jan 21 14:37:33 crc kubenswrapper[4902]: E0121 14:37:33.851237 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86c450a-56ce-4439-8396-f6d87fee149c" containerName="pruner" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.851251 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86c450a-56ce-4439-8396-f6d87fee149c" containerName="pruner" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.851351 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86c450a-56ce-4439-8396-f6d87fee149c" containerName="pruner" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.851709 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:33 crc kubenswrapper[4902]: I0121 14:37:33.876453 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gzb8l"] Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022328 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf7v7\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-kube-api-access-nf7v7\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022368 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb2b422b-c8b3-48ec-901a-e9da16f653fa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022400 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022449 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb2b422b-c8b3-48ec-901a-e9da16f653fa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022509 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-trusted-ca\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022540 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-tls\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-bound-sa-token\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.022616 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-certificates\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.048035 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124029 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-certificates\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124131 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf7v7\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-kube-api-access-nf7v7\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124149 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb2b422b-c8b3-48ec-901a-e9da16f653fa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124182 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb2b422b-c8b3-48ec-901a-e9da16f653fa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-trusted-ca\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124254 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-tls\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.124279 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-bound-sa-token\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.125438 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb2b422b-c8b3-48ec-901a-e9da16f653fa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.125688 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-certificates\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.125958 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb2b422b-c8b3-48ec-901a-e9da16f653fa-trusted-ca\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.131872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-registry-tls\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.132998 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb2b422b-c8b3-48ec-901a-e9da16f653fa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.147439 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-bound-sa-token\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.148162 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf7v7\" (UniqueName: \"kubernetes.io/projected/bb2b422b-c8b3-48ec-901a-e9da16f653fa-kube-api-access-nf7v7\") pod \"image-registry-66df7c8f76-gzb8l\" (UID: \"bb2b422b-c8b3-48ec-901a-e9da16f653fa\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.166441 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.635282 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gzb8l"] Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.722134 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerStarted","Data":"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2"} Jan 21 14:37:34 crc kubenswrapper[4902]: I0121 14:37:34.723713 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" event={"ID":"bb2b422b-c8b3-48ec-901a-e9da16f653fa","Type":"ContainerStarted","Data":"3f908700c8cf853274ca5eeb6feb0f07e0be5242f0793e3f63013efe713ad8fe"} Jan 21 14:37:35 crc kubenswrapper[4902]: I0121 14:37:35.731242 4902 generic.go:334] "Generic (PLEG): container finished" podID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerID="d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2" exitCode=0 Jan 21 14:37:35 crc kubenswrapper[4902]: I0121 14:37:35.731328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerDied","Data":"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2"} Jan 21 14:37:35 crc kubenswrapper[4902]: I0121 14:37:35.733275 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" event={"ID":"bb2b422b-c8b3-48ec-901a-e9da16f653fa","Type":"ContainerStarted","Data":"fab65958ab03b4830e04b8ca296f80ab4d88555ba18b09a05ef3c49b008e6ad5"} Jan 21 14:37:35 crc kubenswrapper[4902]: I0121 14:37:35.733933 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.401012 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.401083 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.450621 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.470154 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" podStartSLOduration=3.4701363069999998 podStartE2EDuration="3.470136307s" podCreationTimestamp="2026-01-21 14:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:35.771703763 +0000 UTC m=+217.848536782" watchObservedRunningTime="2026-01-21 14:37:36.470136307 +0000 UTC m=+218.546969336" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.618968 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.619013 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.659454 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:36 crc kubenswrapper[4902]: I0121 14:37:36.782314 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:37 crc kubenswrapper[4902]: I0121 14:37:37.088962 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:37 crc kubenswrapper[4902]: I0121 14:37:37.089143 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:37 crc kubenswrapper[4902]: I0121 14:37:37.133172 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:37 crc kubenswrapper[4902]: I0121 14:37:37.780455 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.626328 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.628320 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.672988 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.783036 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.951881 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:38 crc kubenswrapper[4902]: I0121 14:37:38.951942 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:39 crc kubenswrapper[4902]: I0121 14:37:39.006584 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:39 crc kubenswrapper[4902]: I0121 14:37:39.792849 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.032271 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.079233 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.230275 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77b9d"] Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.758313 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" 
event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerStarted","Data":"326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4"} Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.762728 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerStarted","Data":"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19"} Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.763570 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-77b9d" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="registry-server" containerID="cri-o://318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" gracePeriod=2 Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.783464 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-98c57" podStartSLOduration=2.877360427 podStartE2EDuration="1m11.783441707s" podCreationTimestamp="2026-01-21 14:36:29 +0000 UTC" firstStartedPulling="2026-01-21 14:36:30.966507951 +0000 UTC m=+153.043340980" lastFinishedPulling="2026-01-21 14:37:39.872589231 +0000 UTC m=+221.949422260" observedRunningTime="2026-01-21 14:37:40.780476796 +0000 UTC m=+222.857309845" watchObservedRunningTime="2026-01-21 14:37:40.783441707 +0000 UTC m=+222.860274736" Jan 21 14:37:40 crc kubenswrapper[4902]: I0121 14:37:40.800036 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fl2j4" podStartSLOduration=3.802091478 podStartE2EDuration="1m14.800019035s" podCreationTimestamp="2026-01-21 14:36:26 +0000 UTC" firstStartedPulling="2026-01-21 14:36:28.897768285 +0000 UTC m=+150.974601314" lastFinishedPulling="2026-01-21 14:37:39.895695842 +0000 UTC m=+221.972528871" observedRunningTime="2026-01-21 14:37:40.799572839 +0000 UTC m=+222.876405868" watchObservedRunningTime="2026-01-21 14:37:40.800019035 +0000 UTC m=+222.876852064" Jan 21 14:37:40 crc kubenswrapper[4902]: E0121 14:37:40.831293 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008311b3_7361_4466_aacd_01bbaa16f6df.slice/crio-318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.259024 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.425593 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") pod \"008311b3-7361-4466-aacd-01bbaa16f6df\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.425915 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") pod \"008311b3-7361-4466-aacd-01bbaa16f6df\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.425975 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") pod \"008311b3-7361-4466-aacd-01bbaa16f6df\" (UID: \"008311b3-7361-4466-aacd-01bbaa16f6df\") " Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.426670 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities" (OuterVolumeSpecName: "utilities") pod "008311b3-7361-4466-aacd-01bbaa16f6df" (UID: "008311b3-7361-4466-aacd-01bbaa16f6df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.427724 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.440831 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q" (OuterVolumeSpecName: "kube-api-access-2sp9q") pod "008311b3-7361-4466-aacd-01bbaa16f6df" (UID: "008311b3-7361-4466-aacd-01bbaa16f6df"). InnerVolumeSpecName "kube-api-access-2sp9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.498926 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008311b3-7361-4466-aacd-01bbaa16f6df" (UID: "008311b3-7361-4466-aacd-01bbaa16f6df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.528769 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sp9q\" (UniqueName: \"kubernetes.io/projected/008311b3-7361-4466-aacd-01bbaa16f6df-kube-api-access-2sp9q\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.528806 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008311b3-7361-4466-aacd-01bbaa16f6df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769752 4902 generic.go:334] "Generic (PLEG): container finished" podID="008311b3-7361-4466-aacd-01bbaa16f6df" containerID="318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" exitCode=0 Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769833 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerDied","Data":"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed"} Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769858 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77b9d" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77b9d" event={"ID":"008311b3-7361-4466-aacd-01bbaa16f6df","Type":"ContainerDied","Data":"e20ca1be73e15aa077e2b594ce74f037b1aa06b9991f1e73b39c1c1eae4ecbca"} Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.769920 4902 scope.go:117] "RemoveContainer" containerID="318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.787960 4902 scope.go:117] "RemoveContainer" containerID="226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.801460 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77b9d"] Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.805952 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-77b9d"] Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.826012 4902 scope.go:117] "RemoveContainer" containerID="063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.839947 4902 scope.go:117] "RemoveContainer" containerID="318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" Jan 21 14:37:41 crc kubenswrapper[4902]: E0121 14:37:41.840359 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed\": container with ID starting with 318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed not found: ID does not exist" containerID="318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.840466 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed"} err="failed to get container status 
\"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed\": rpc error: code = NotFound desc = could not find container \"318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed\": container with ID starting with 318d96f54194dcdbb751e1ba25482047c069d5701aa9e97682cb6ef76cfb56ed not found: ID does not exist" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.840573 4902 scope.go:117] "RemoveContainer" containerID="226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc" Jan 21 14:37:41 crc kubenswrapper[4902]: E0121 14:37:41.841114 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc\": container with ID starting with 226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc not found: ID does not exist" containerID="226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.841175 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc"} err="failed to get container status \"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc\": rpc error: code = NotFound desc = could not find container \"226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc\": container with ID starting with 226f8033dfc960b2048f5a5746cfbc89e10af030144bd958ae6325089083fcbc not found: ID does not exist" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.841223 4902 scope.go:117] "RemoveContainer" containerID="063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9" Jan 21 14:37:41 crc kubenswrapper[4902]: E0121 14:37:41.862099 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9\": container with ID starting with 063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9 not found: ID does not exist" containerID="063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9" Jan 21 14:37:41 crc kubenswrapper[4902]: I0121 14:37:41.862163 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9"} err="failed to get container status \"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9\": rpc error: code = NotFound desc = could not find container \"063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9\": container with ID starting with 063b36f18a1bb6d459f21845a8ce4f47fc36116393e41f5afc69b58116e1a5b9 not found: ID does not exist" Jan 21 14:37:42 crc kubenswrapper[4902]: I0121 14:37:42.302493 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" path="/var/lib/kubelet/pods/008311b3-7361-4466-aacd-01bbaa16f6df/volumes" Jan 21 14:37:42 crc kubenswrapper[4902]: I0121 14:37:42.628417 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:37:42 crc kubenswrapper[4902]: I0121 14:37:42.629053 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dl5zx" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="registry-server" 
containerID="cri-o://02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6" gracePeriod=2 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.625877 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.626376 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chl56" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server" containerID="cri-o://f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31" gracePeriod=2 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.935285 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.935700 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fl2j4" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="registry-server" containerID="cri-o://86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19" gracePeriod=30 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.941202 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.941503 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xgf94" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="registry-server" containerID="cri-o://dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb" gracePeriod=30 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.961545 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.961813 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fqq5l" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server" containerID="cri-o://d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" gracePeriod=30 Jan 21 14:37:43 crc kubenswrapper[4902]: E0121 14:37:43.965364 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 14:37:43 crc kubenswrapper[4902]: E0121 14:37:43.968602 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 14:37:43 crc kubenswrapper[4902]: E0121 14:37:43.970439 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 14:37:43 crc kubenswrapper[4902]: E0121 14:37:43.970490 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-fqq5l" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server" Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.981307 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.981562 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerName="marketplace-operator" containerID="cri-o://f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0" gracePeriod=30 Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.992515 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:37:43 crc kubenswrapper[4902]: I0121 14:37:43.992887 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d7hf5" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="registry-server" containerID="cri-o://62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4" gracePeriod=30 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.008752 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z4vkp"] Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.009595 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="extract-utilities" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.009622 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="extract-utilities" Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.009640 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="extract-content" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.009648 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="extract-content" Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.009659 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="registry-server" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.009670 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="registry-server" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.009917 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="008311b3-7361-4466-aacd-01bbaa16f6df" containerName="registry-server" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.010646 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.023088 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.023611 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-98c57" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="registry-server" containerID="cri-o://326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4" gracePeriod=30 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.028707 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z4vkp"] Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.185169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.185540 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddwg\" (UniqueName: \"kubernetes.io/projected/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-kube-api-access-zddwg\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.185567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.219448 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb"] Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.286554 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zddwg\" (UniqueName: \"kubernetes.io/projected/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-kube-api-access-zddwg\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.286633 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.286682 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: 
\"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.289131 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.293887 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.322614 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zddwg\" (UniqueName: \"kubernetes.io/projected/021a0823-715d-4b67-b5b2-b52ec6d6c7e8-kube-api-access-zddwg\") pod \"marketplace-operator-79b997595-z4vkp\" (UID: \"021a0823-715d-4b67-b5b2-b52ec6d6c7e8\") " pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.387495 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.620671 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.711564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") pod \"3c88f2d9-944f-408e-bfe3-41c8baac6175\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.711651 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") pod \"3c88f2d9-944f-408e-bfe3-41c8baac6175\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.711705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") pod \"3c88f2d9-944f-408e-bfe3-41c8baac6175\" (UID: \"3c88f2d9-944f-408e-bfe3-41c8baac6175\") " Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.714585 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities" (OuterVolumeSpecName: "utilities") pod "3c88f2d9-944f-408e-bfe3-41c8baac6175" (UID: "3c88f2d9-944f-408e-bfe3-41c8baac6175"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.719766 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7" (OuterVolumeSpecName: "kube-api-access-9nlk7") pod "3c88f2d9-944f-408e-bfe3-41c8baac6175" (UID: "3c88f2d9-944f-408e-bfe3-41c8baac6175"). InnerVolumeSpecName "kube-api-access-9nlk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.813088 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.813429 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nlk7\" (UniqueName: \"kubernetes.io/projected/3c88f2d9-944f-408e-bfe3-41c8baac6175-kube-api-access-9nlk7\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.816622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c88f2d9-944f-408e-bfe3-41c8baac6175" (UID: "3c88f2d9-944f-408e-bfe3-41c8baac6175"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.826278 4902 generic.go:334] "Generic (PLEG): container finished" podID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerID="02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.830026 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerDied","Data":"02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.835057 4902 generic.go:334] "Generic (PLEG): container finished" podID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerID="dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.835204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerDied","Data":"dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.836874 4902 generic.go:334] "Generic (PLEG): container finished" podID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerID="f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.836980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerDied","Data":"f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.838195 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-98c57_ea3b3336-0258-4b66-bd33-dd4e01543236/registry-server/0.log" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.839467 4902 generic.go:334] 
"Generic (PLEG): container finished" podID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerID="326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4" exitCode=1 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.839571 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerDied","Data":"326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.841453 4902 generic.go:334] "Generic (PLEG): container finished" podID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.841536 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerDied","Data":"d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843449 4902 generic.go:334] "Generic (PLEG): container finished" podID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerID="86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843599 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerDied","Data":"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843684 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fl2j4" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843923 4902 scope.go:117] "RemoveContainer" containerID="86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.843911 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fl2j4" event={"ID":"3c88f2d9-944f-408e-bfe3-41c8baac6175","Type":"ContainerDied","Data":"3ede55dacea16111f6202914e24a4d44b7e914f57126067eef1577b038c06a0b"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.845357 4902 generic.go:334] "Generic (PLEG): container finished" podID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerID="f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.845449 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" event={"ID":"179de16d-c6d0-4cda-8d1f-8c2396301175","Type":"ContainerDied","Data":"f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.850823 4902 generic.go:334] "Generic (PLEG): container finished" podID="19482ae1-f291-4111-83b5-56fa37063508" containerID="62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4" exitCode=0 Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.850882 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerDied","Data":"62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4"} Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 
14:37:44.883453 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-98c57_ea3b3336-0258-4b66-bd33-dd4e01543236/registry-server/0.log" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.884310 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.895006 4902 scope.go:117] "RemoveContainer" containerID="d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.897611 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"] Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.898697 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.901864 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fl2j4"] Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.914438 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c88f2d9-944f-408e-bfe3-41c8baac6175-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.947677 4902 scope.go:117] "RemoveContainer" containerID="d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.989837 4902 scope.go:117] "RemoveContainer" containerID="86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19" Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.990374 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19\": container with ID starting with 86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19 not found: ID does not exist" containerID="86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.990408 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19"} err="failed to get container status \"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19\": rpc error: code = NotFound desc = could not find container \"86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19\": container with ID starting with 86b6060c2966a2a50319b812d97955dab95b6cc6a863bb304f1ff00fc261cd19 not found: ID does not exist" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.990433 4902 scope.go:117] "RemoveContainer" containerID="d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2" Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.990845 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2\": container with ID starting with d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2 not found: ID does not exist" containerID="d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.990859 4902 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2"} err="failed to get container status \"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2\": rpc error: code = NotFound desc = could not find container \"d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2\": container with ID starting with d47b9202768c23da8b9ddf08fa74c03fd7af32e18474775bda06b918e62fb8d2 not found: ID does not exist" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.990870 4902 scope.go:117] "RemoveContainer" containerID="d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6" Jan 21 14:37:44 crc kubenswrapper[4902]: E0121 14:37:44.991799 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6\": container with ID starting with d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6 not found: ID does not exist" containerID="d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6" Jan 21 14:37:44 crc kubenswrapper[4902]: I0121 14:37:44.991816 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6"} err="failed to get container status \"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6\": rpc error: code = NotFound desc = could not find container \"d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6\": container with ID starting with d21ddf243e31e4b7e0f150105b5b8e874517c71e43e94e38c2969e06db499de6 not found: ID does not exist" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015274 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") pod \"ea3b3336-0258-4b66-bd33-dd4e01543236\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015324 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") pod \"ea3b3336-0258-4b66-bd33-dd4e01543236\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015347 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") pod \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015387 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") pod \"ea3b3336-0258-4b66-bd33-dd4e01543236\" (UID: \"ea3b3336-0258-4b66-bd33-dd4e01543236\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.015460 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") pod \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 
14:37:45.015488 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") pod \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\" (UID: \"cc91d441-7f4a-45f8-8f71-1f04e4ade80c\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.016265 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities" (OuterVolumeSpecName: "utilities") pod "ea3b3336-0258-4b66-bd33-dd4e01543236" (UID: "ea3b3336-0258-4b66-bd33-dd4e01543236"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.021688 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t" (OuterVolumeSpecName: "kube-api-access-t2f8t") pod "ea3b3336-0258-4b66-bd33-dd4e01543236" (UID: "ea3b3336-0258-4b66-bd33-dd4e01543236"). InnerVolumeSpecName "kube-api-access-t2f8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.021745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz" (OuterVolumeSpecName: "kube-api-access-8kswz") pod "cc91d441-7f4a-45f8-8f71-1f04e4ade80c" (UID: "cc91d441-7f4a-45f8-8f71-1f04e4ade80c"). InnerVolumeSpecName "kube-api-access-8kswz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.022085 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities" (OuterVolumeSpecName: "utilities") pod "cc91d441-7f4a-45f8-8f71-1f04e4ade80c" (UID: "cc91d441-7f4a-45f8-8f71-1f04e4ade80c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.029650 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.029688 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2f8t\" (UniqueName: \"kubernetes.io/projected/ea3b3336-0258-4b66-bd33-dd4e01543236-kube-api-access-t2f8t\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.029702 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kswz\" (UniqueName: \"kubernetes.io/projected/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-kube-api-access-8kswz\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.029717 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.068798 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc91d441-7f4a-45f8-8f71-1f04e4ade80c" (UID: "cc91d441-7f4a-45f8-8f71-1f04e4ade80c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.111882 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.130937 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc91d441-7f4a-45f8-8f71-1f04e4ade80c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.152445 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.158242 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.198995 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea3b3336-0258-4b66-bd33-dd4e01543236" (UID: "ea3b3336-0258-4b66-bd33-dd4e01543236"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.232600 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") pod \"179de16d-c6d0-4cda-8d1f-8c2396301175\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.232660 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") pod \"179de16d-c6d0-4cda-8d1f-8c2396301175\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.232730 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") pod \"179de16d-c6d0-4cda-8d1f-8c2396301175\" (UID: \"179de16d-c6d0-4cda-8d1f-8c2396301175\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.233115 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea3b3336-0258-4b66-bd33-dd4e01543236-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.233162 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "179de16d-c6d0-4cda-8d1f-8c2396301175" (UID: "179de16d-c6d0-4cda-8d1f-8c2396301175"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.236162 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "179de16d-c6d0-4cda-8d1f-8c2396301175" (UID: "179de16d-c6d0-4cda-8d1f-8c2396301175"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.236285 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth" (OuterVolumeSpecName: "kube-api-access-trmth") pod "179de16d-c6d0-4cda-8d1f-8c2396301175" (UID: "179de16d-c6d0-4cda-8d1f-8c2396301175"). InnerVolumeSpecName "kube-api-access-trmth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.284973 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.287838 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334308 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") pod \"bc7ccff8-2db2-4663-9565-42f2357e4bda\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334385 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") pod \"19482ae1-f291-4111-83b5-56fa37063508\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334453 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") pod \"19482ae1-f291-4111-83b5-56fa37063508\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334531 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") pod \"bc7ccff8-2db2-4663-9565-42f2357e4bda\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334558 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") pod \"19482ae1-f291-4111-83b5-56fa37063508\" (UID: \"19482ae1-f291-4111-83b5-56fa37063508\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.334590 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") pod \"bc7ccff8-2db2-4663-9565-42f2357e4bda\" (UID: \"bc7ccff8-2db2-4663-9565-42f2357e4bda\") " Jan 21 14:37:45 crc 
kubenswrapper[4902]: I0121 14:37:45.335503 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities" (OuterVolumeSpecName: "utilities") pod "bc7ccff8-2db2-4663-9565-42f2357e4bda" (UID: "bc7ccff8-2db2-4663-9565-42f2357e4bda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.339219 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g" (OuterVolumeSpecName: "kube-api-access-5wc5g") pod "19482ae1-f291-4111-83b5-56fa37063508" (UID: "19482ae1-f291-4111-83b5-56fa37063508"). InnerVolumeSpecName "kube-api-access-5wc5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.339474 4902 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.339517 4902 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/179de16d-c6d0-4cda-8d1f-8c2396301175-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.339533 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trmth\" (UniqueName: \"kubernetes.io/projected/179de16d-c6d0-4cda-8d1f-8c2396301175-kube-api-access-trmth\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.340095 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities" (OuterVolumeSpecName: "utilities") pod "19482ae1-f291-4111-83b5-56fa37063508" (UID: "19482ae1-f291-4111-83b5-56fa37063508"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.340594 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4" (OuterVolumeSpecName: "kube-api-access-4ntv4") pod "bc7ccff8-2db2-4663-9565-42f2357e4bda" (UID: "bc7ccff8-2db2-4663-9565-42f2357e4bda"). InnerVolumeSpecName "kube-api-access-4ntv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.353361 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z4vkp"] Jan 21 14:37:45 crc kubenswrapper[4902]: W0121 14:37:45.360248 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod021a0823_715d_4b67_b5b2_b52ec6d6c7e8.slice/crio-c3f8fd209d74ae56cf38d9f7ce9254888506eeb74c63efd98ea97d301dddaac3 WatchSource:0}: Error finding container c3f8fd209d74ae56cf38d9f7ce9254888506eeb74c63efd98ea97d301dddaac3: Status 404 returned error can't find the container with id c3f8fd209d74ae56cf38d9f7ce9254888506eeb74c63efd98ea97d301dddaac3 Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.360825 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19482ae1-f291-4111-83b5-56fa37063508" (UID: "19482ae1-f291-4111-83b5-56fa37063508"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.391794 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc7ccff8-2db2-4663-9565-42f2357e4bda" (UID: "bc7ccff8-2db2-4663-9565-42f2357e4bda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.440883 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") pod \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441023 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") pod \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441061 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") pod \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441081 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") pod \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441118 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") pod \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\" (UID: \"4504c44c-17da-4a32-ac81-7efc9ec6b1cb\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441142 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") pod \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\" (UID: \"64be302e-c39a-4e45-8b5d-07b8819a6eb0\") " Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441403 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ntv4\" (UniqueName: \"kubernetes.io/projected/bc7ccff8-2db2-4663-9565-42f2357e4bda-kube-api-access-4ntv4\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441416 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wc5g\" (UniqueName: \"kubernetes.io/projected/19482ae1-f291-4111-83b5-56fa37063508-kube-api-access-5wc5g\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441425 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441435 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7ccff8-2db2-4663-9565-42f2357e4bda-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441443 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441452 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19482ae1-f291-4111-83b5-56fa37063508-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441799 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities" (OuterVolumeSpecName: "utilities") pod "64be302e-c39a-4e45-8b5d-07b8819a6eb0" (UID: "64be302e-c39a-4e45-8b5d-07b8819a6eb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.441891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities" (OuterVolumeSpecName: "utilities") pod "4504c44c-17da-4a32-ac81-7efc9ec6b1cb" (UID: "4504c44c-17da-4a32-ac81-7efc9ec6b1cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.443657 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb" (OuterVolumeSpecName: "kube-api-access-xhvxb") pod "4504c44c-17da-4a32-ac81-7efc9ec6b1cb" (UID: "4504c44c-17da-4a32-ac81-7efc9ec6b1cb"). InnerVolumeSpecName "kube-api-access-xhvxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.443956 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v" (OuterVolumeSpecName: "kube-api-access-jq29v") pod "64be302e-c39a-4e45-8b5d-07b8819a6eb0" (UID: "64be302e-c39a-4e45-8b5d-07b8819a6eb0"). InnerVolumeSpecName "kube-api-access-jq29v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.468168 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4504c44c-17da-4a32-ac81-7efc9ec6b1cb" (UID: "4504c44c-17da-4a32-ac81-7efc9ec6b1cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544331 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544376 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544389 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhvxb\" (UniqueName: \"kubernetes.io/projected/4504c44c-17da-4a32-ac81-7efc9ec6b1cb-kube-api-access-xhvxb\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544406 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq29v\" (UniqueName: \"kubernetes.io/projected/64be302e-c39a-4e45-8b5d-07b8819a6eb0-kube-api-access-jq29v\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.544418 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.587599 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64be302e-c39a-4e45-8b5d-07b8819a6eb0" (UID: "64be302e-c39a-4e45-8b5d-07b8819a6eb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.645400 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be302e-c39a-4e45-8b5d-07b8819a6eb0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.856796 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" event={"ID":"021a0823-715d-4b67-b5b2-b52ec6d6c7e8","Type":"ContainerStarted","Data":"c3f8fd209d74ae56cf38d9f7ce9254888506eeb74c63efd98ea97d301dddaac3"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.858566 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-98c57_ea3b3336-0258-4b66-bd33-dd4e01543236/registry-server/0.log" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.859859 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98c57" event={"ID":"ea3b3336-0258-4b66-bd33-dd4e01543236","Type":"ContainerDied","Data":"5d200290d772299c202f1a65fa0061ebdcb1ccceea36fa735b536ebf39ba3497"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.859924 4902 scope.go:117] "RemoveContainer" containerID="326f8caa7a461344deaec70d776c5a6beda6a87d6c452e21917d3f11867ce5f4" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.859877 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98c57" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.863227 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqq5l" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.863221 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqq5l" event={"ID":"bc7ccff8-2db2-4663-9565-42f2357e4bda","Type":"ContainerDied","Data":"f45d37f7ac621a924bdf6d205f6dcfb689dbb7f1904649cc3bbc2a2dac0231b6"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.866165 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.866640 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xm5cd" event={"ID":"179de16d-c6d0-4cda-8d1f-8c2396301175","Type":"ContainerDied","Data":"55ed63decb6129b185123334a130753c5c33884bc167ffd4431cd04957e60efe"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.871662 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d7hf5" event={"ID":"19482ae1-f291-4111-83b5-56fa37063508","Type":"ContainerDied","Data":"024d9ea6e07bff7f0ecb8463467da83d20693d50a025a771bbc45b531070e2fd"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.871677 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d7hf5" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.875388 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dl5zx" event={"ID":"4504c44c-17da-4a32-ac81-7efc9ec6b1cb","Type":"ContainerDied","Data":"6d22776fe71b564cc70ae18c09c444bfb7b9c6605b6f0f8a041e615143a16c69"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.875522 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dl5zx" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.880136 4902 scope.go:117] "RemoveContainer" containerID="8cab10508e95ad152396df1527b6a482788a8695c58e273be61d4a6811398d99" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.880287 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chl56" event={"ID":"64be302e-c39a-4e45-8b5d-07b8819a6eb0","Type":"ContainerDied","Data":"c2b8854fe921d56cd0a1e4ec23fb7eafebd1972826e58b8204b172f529d4bbf4"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.880391 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chl56" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.884753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgf94" event={"ID":"cc91d441-7f4a-45f8-8f71-1f04e4ade80c","Type":"ContainerDied","Data":"053e278127621d0aa574a001b3d7f98dd3d2a28ff0f85cb3abcc55c7682fa466"} Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.884868 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgf94" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.901763 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.906172 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-98c57"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.917035 4902 scope.go:117] "RemoveContainer" containerID="0863c2ef512883dfa5c8cb15d84b8d3e8007faf5a420481b07e81570d0bbc513" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.921837 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.925096 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fqq5l"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.948148 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.950975 4902 scope.go:117] "RemoveContainer" containerID="d0628adcdeba28f3f91c06f4ebfb0013ccce0fcc7b38a42868bbe4850a301bc0" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.954554 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xm5cd"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.962901 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.967303 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-d7hf5"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.972945 4902 scope.go:117] "RemoveContainer" containerID="b890da03655fa68fe9c4fc2736b49d628cd2dfdd1eb8c53e0f6a92826e80b3e8" Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.979966 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.985264 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dl5zx"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.991521 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.996310 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chl56"] Jan 21 14:37:45 crc kubenswrapper[4902]: I0121 14:37:45.998972 4902 scope.go:117] "RemoveContainer" containerID="a1204ec2b5e76cfd0fb6167da34f831607a537ca3ed511cbf74c9c91b780c2f9" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.013174 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"] Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.016315 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xgf94"] Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.019592 4902 scope.go:117] "RemoveContainer" containerID="f47ac0d984bd534f8dbc95c34421c4c7e222580c524d56fef0a86d89726b4ac0" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.044310 4902 scope.go:117] "RemoveContainer" containerID="62550e0c563096e2eaf93d50c31f0aa211becf7530f77d8467b535ecccc978a4" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.057219 4902 scope.go:117] "RemoveContainer" containerID="f826ca325a2cf505b8c815980387763af7f3ba9503a5207a8722972de88aec84" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.070251 4902 scope.go:117] "RemoveContainer" containerID="9b604d27ef105b652ea19c99e2ae291eacdb1348bd4b5e106e90424e329a7180" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.086354 4902 scope.go:117] "RemoveContainer" containerID="02dbc8575387c68b7070c5eedfc10c67111d8864380f2d313d7bb3003fd6c4e6" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.098994 4902 scope.go:117] "RemoveContainer" containerID="32894c3de1e75b38e7274c12ebe204f101b1ece066ced856e2483329caa616b0" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.110378 4902 scope.go:117] "RemoveContainer" containerID="3c2863c18937166425d91344f3ec1614a7f70129ffe061c9c5ee80eb31756b3f" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.122275 4902 scope.go:117] "RemoveContainer" containerID="f05bf1ddcb70474108853c8a55dc880b6e2d426b5f8cf04726fd5568c5d20f31" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.302642 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" path="/var/lib/kubelet/pods/179de16d-c6d0-4cda-8d1f-8c2396301175/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.303358 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19482ae1-f291-4111-83b5-56fa37063508" path="/var/lib/kubelet/pods/19482ae1-f291-4111-83b5-56fa37063508/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.304327 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" 
path="/var/lib/kubelet/pods/3c88f2d9-944f-408e-bfe3-41c8baac6175/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.305912 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" path="/var/lib/kubelet/pods/4504c44c-17da-4a32-ac81-7efc9ec6b1cb/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.306623 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" path="/var/lib/kubelet/pods/64be302e-c39a-4e45-8b5d-07b8819a6eb0/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.307734 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" path="/var/lib/kubelet/pods/bc7ccff8-2db2-4663-9565-42f2357e4bda/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.308424 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" path="/var/lib/kubelet/pods/cc91d441-7f4a-45f8-8f71-1f04e4ade80c/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.309065 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" path="/var/lib/kubelet/pods/ea3b3336-0258-4b66-bd33-dd4e01543236/volumes" Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.731868 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.732240 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerName="controller-manager" containerID="cri-o://7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" gracePeriod=30 Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.829056 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:46 crc kubenswrapper[4902]: I0121 14:37:46.829297 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" podUID="64242610-9b91-49bc-9400-12298973aad0" containerName="route-controller-manager" containerID="cri-o://763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" gracePeriod=30 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.038451 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ppndl"] Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.039536 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.039613 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.039685 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.039746 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.039811 4902 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.039889 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040071 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040236 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040372 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040436 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040544 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerName="marketplace-operator" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040610 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerName="marketplace-operator" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040675 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040848 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.040910 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.040980 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041080 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041150 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041212 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041265 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041372 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041426 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server" Jan 21 14:37:47 crc 
kubenswrapper[4902]: E0121 14:37:47.041479 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041539 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041600 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041661 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041716 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041774 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041835 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.041888 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.041949 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042002 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042112 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042191 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="extract-utilities" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042275 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042335 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042423 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042508 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042576 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042671 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19482ae1-f291-4111-83b5-56fa37063508" 
containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042747 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042810 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.042899 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.042958 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="extract-content" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043166 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="19482ae1-f291-4111-83b5-56fa37063508" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043254 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c88f2d9-944f-408e-bfe3-41c8baac6175" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043338 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3b3336-0258-4b66-bd33-dd4e01543236" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043395 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc91d441-7f4a-45f8-8f71-1f04e4ade80c" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043457 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4504c44c-17da-4a32-ac81-7efc9ec6b1cb" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043525 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="179de16d-c6d0-4cda-8d1f-8c2396301175" containerName="marketplace-operator" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043618 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7ccff8-2db2-4663-9565-42f2357e4bda" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.043678 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="64be302e-c39a-4e45-8b5d-07b8819a6eb0" containerName="registry-server" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.044979 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.046966 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppndl"] Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.048719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.067782 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwn2l\" (UniqueName: \"kubernetes.io/projected/663aee99-c55e-45ba-b5ff-a67def0f524e-kube-api-access-zwn2l\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.067862 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-catalog-content\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.067905 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-utilities\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.127880 4902 scope.go:117] "RemoveContainer" containerID="b0e5bd21eb45121c58536e203537cf733407f5259b12126eae2bd654f50021a5" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.169389 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-catalog-content\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.169463 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-utilities\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.169799 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwn2l\" (UniqueName: \"kubernetes.io/projected/663aee99-c55e-45ba-b5ff-a67def0f524e-kube-api-access-zwn2l\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.170331 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-utilities\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.170372 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/663aee99-c55e-45ba-b5ff-a67def0f524e-catalog-content\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.188570 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwn2l\" (UniqueName: \"kubernetes.io/projected/663aee99-c55e-45ba-b5ff-a67def0f524e-kube-api-access-zwn2l\") pod \"redhat-marketplace-ppndl\" (UID: \"663aee99-c55e-45ba-b5ff-a67def0f524e\") " pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.279218 4902 scope.go:117] "RemoveContainer" containerID="b780515dde8ccd794f02bd3dc6005c6baf519de90ebd8e42d401146a27f9e971" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.299593 4902 scope.go:117] "RemoveContainer" containerID="dafc51f1d7f9ba142fe8d6c07ed9585d44e582c8f889e035fd698495242522fb" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.314166 4902 scope.go:117] "RemoveContainer" containerID="d24c33bc95b69d74e25ab4cc6e01c313a3384aa99b80fde04e7056ebb32a4780" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.332097 4902 scope.go:117] "RemoveContainer" containerID="4e686b959372288a5668349b284ecd38a38ea795d787fa0d477db1901cf9976c" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.369531 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.526361 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.589439 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681433 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") pod \"64242610-9b91-49bc-9400-12298973aad0\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681511 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") pod \"64242610-9b91-49bc-9400-12298973aad0\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681557 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") pod \"64242610-9b91-49bc-9400-12298973aad0\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681702 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681782 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681806 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.681831 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") pod \"64242610-9b91-49bc-9400-12298973aad0\" (UID: \"64242610-9b91-49bc-9400-12298973aad0\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.682993 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca" (OuterVolumeSpecName: "client-ca") pod "64242610-9b91-49bc-9400-12298973aad0" (UID: "64242610-9b91-49bc-9400-12298973aad0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.683169 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config" (OuterVolumeSpecName: "config") pod "64242610-9b91-49bc-9400-12298973aad0" (UID: "64242610-9b91-49bc-9400-12298973aad0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.683465 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca" (OuterVolumeSpecName: "client-ca") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.684873 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj" (OuterVolumeSpecName: "kube-api-access-wwrwj") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "kube-api-access-wwrwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.684869 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "64242610-9b91-49bc-9400-12298973aad0" (UID: "64242610-9b91-49bc-9400-12298973aad0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.685441 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.686379 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p" (OuterVolumeSpecName: "kube-api-access-5xh6p") pod "64242610-9b91-49bc-9400-12298973aad0" (UID: "64242610-9b91-49bc-9400-12298973aad0"). InnerVolumeSpecName "kube-api-access-5xh6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.769362 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.769715 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.769768 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.771839 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.771911 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388" gracePeriod=600 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.782908 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.782962 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") pod \"7db6f852-8480-412f-a9bf-9afd18c41d83\" (UID: \"7db6f852-8480-412f-a9bf-9afd18c41d83\") " Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783182 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7db6f852-8480-412f-a9bf-9afd18c41d83-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783203 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783320 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783454 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64242610-9b91-49bc-9400-12298973aad0-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783472 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xh6p\" (UniqueName: \"kubernetes.io/projected/64242610-9b91-49bc-9400-12298973aad0-kube-api-access-5xh6p\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783483 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64242610-9b91-49bc-9400-12298973aad0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783492 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwrwj\" (UniqueName: \"kubernetes.io/projected/7db6f852-8480-412f-a9bf-9afd18c41d83-kube-api-access-wwrwj\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.783553 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.784680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config" (OuterVolumeSpecName: "config") pod "7db6f852-8480-412f-a9bf-9afd18c41d83" (UID: "7db6f852-8480-412f-a9bf-9afd18c41d83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.796626 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppndl"] Jan 21 14:37:47 crc kubenswrapper[4902]: W0121 14:37:47.803600 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod663aee99_c55e_45ba_b5ff_a67def0f524e.slice/crio-449380a51ff98996b67beeb5691ab577c4dbce6123ccc219a52b4709312762a9 WatchSource:0}: Error finding container 449380a51ff98996b67beeb5691ab577c4dbce6123ccc219a52b4709312762a9: Status 404 returned error can't find the container with id 449380a51ff98996b67beeb5691ab577c4dbce6123ccc219a52b4709312762a9 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.884231 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.884262 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7db6f852-8480-412f-a9bf-9afd18c41d83-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.901925 4902 generic.go:334] "Generic (PLEG): container finished" podID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerID="7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" exitCode=0 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.902011 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" event={"ID":"7db6f852-8480-412f-a9bf-9afd18c41d83","Type":"ContainerDied","Data":"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e"} Jan 21 14:37:47 crc 
kubenswrapper[4902]: I0121 14:37:47.902059 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" event={"ID":"7db6f852-8480-412f-a9bf-9afd18c41d83","Type":"ContainerDied","Data":"1430f5f7f582f37ed906dc3199394088ef56b752688bdfa0d7374651d056e2d3"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.902082 4902 scope.go:117] "RemoveContainer" containerID="7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.902202 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758f874869-2jb7w" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.907482 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerStarted","Data":"449380a51ff98996b67beeb5691ab577c4dbce6123ccc219a52b4709312762a9"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.909701 4902 generic.go:334] "Generic (PLEG): container finished" podID="64242610-9b91-49bc-9400-12298973aad0" containerID="763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" exitCode=0 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.909783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" event={"ID":"64242610-9b91-49bc-9400-12298973aad0","Type":"ContainerDied","Data":"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.909845 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" event={"ID":"64242610-9b91-49bc-9400-12298973aad0","Type":"ContainerDied","Data":"471419efc12598c3a121f35256027b6584df5df609cdadc31ab16086370f4330"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.909911 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.916926 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" event={"ID":"021a0823-715d-4b67-b5b2-b52ec6d6c7e8","Type":"ContainerStarted","Data":"3ceffc7e30beda18fa51d24c5f70afc752b14a0351e8a78296b1dfa51bb8f1e8"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.918091 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.924349 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.925606 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388" exitCode=0 Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.925641 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388"} Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.942699 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-z4vkp" podStartSLOduration=4.942679047 podStartE2EDuration="4.942679047s" podCreationTimestamp="2026-01-21 14:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:47.938615648 +0000 UTC m=+230.015448687" watchObservedRunningTime="2026-01-21 14:37:47.942679047 +0000 UTC m=+230.019512076" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.946933 4902 scope.go:117] "RemoveContainer" containerID="7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.947685 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e\": container with ID starting with 7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e not found: ID does not exist" containerID="7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.947743 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e"} err="failed to get container status \"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e\": rpc error: code = NotFound desc = could not find container \"7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e\": container with ID starting with 7b8f58b172829fe6a47c9831cc38555b4edd2c569a38dc592610fc39c5c67c1e not found: ID does not exist" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.947773 4902 scope.go:117] "RemoveContainer" containerID="763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.968455 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.975136 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677957ff68-7pb2f"] Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.988001 4902 scope.go:117] "RemoveContainer" containerID="763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" Jan 21 14:37:47 crc kubenswrapper[4902]: E0121 14:37:47.988444 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152\": container with ID starting with 763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152 not found: ID does not exist" containerID="763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152" Jan 21 14:37:47 crc kubenswrapper[4902]: I0121 14:37:47.988477 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152"} err="failed to get container status \"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152\": rpc error: code = NotFound desc = could not find container \"763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152\": container with ID starting with 763cf8101125c3326eb8863427dba48a202915d2b63b63b2c61e3a3ea8120152 not found: ID does not exist" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.057907 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.061789 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-758f874869-2jb7w"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065019 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8kplb"] Jan 21 14:37:48 crc kubenswrapper[4902]: E0121 14:37:48.065349 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64242610-9b91-49bc-9400-12298973aad0" containerName="route-controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065371 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="64242610-9b91-49bc-9400-12298973aad0" containerName="route-controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: E0121 14:37:48.065390 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerName="controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065396 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerName="controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065501 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" containerName="controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.065517 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="64242610-9b91-49bc-9400-12298973aad0" containerName="route-controller-manager" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.066936 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.067093 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kplb"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.079323 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.087512 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-catalog-content\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.087608 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-utilities\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.087689 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd8vn\" (UniqueName: \"kubernetes.io/projected/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-kube-api-access-zd8vn\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.190230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-catalog-content\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.190329 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-utilities\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.190383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd8vn\" (UniqueName: \"kubernetes.io/projected/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-kube-api-access-zd8vn\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.191115 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-catalog-content\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.191251 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-utilities\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " 
pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.212710 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd8vn\" (UniqueName: \"kubernetes.io/projected/fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c-kube-api-access-zd8vn\") pod \"redhat-operators-8kplb\" (UID: \"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c\") " pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.303623 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64242610-9b91-49bc-9400-12298973aad0" path="/var/lib/kubelet/pods/64242610-9b91-49bc-9400-12298973aad0/volumes" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.304799 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db6f852-8480-412f-a9bf-9afd18c41d83" path="/var/lib/kubelet/pods/7db6f852-8480-412f-a9bf-9afd18c41d83/volumes" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.401533 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.773969 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.775208 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.776997 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.777543 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.778009 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.778422 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.778562 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.778977 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.779013 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.779350 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.786208 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.786771 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.787461 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.787530 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.790852 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.790911 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.790861 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.795272 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.798768 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.798819 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.798854 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799241 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799281 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799317 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799376 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.799489 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.801340 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.858619 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kplb"] Jan 21 14:37:48 crc kubenswrapper[4902]: W0121 14:37:48.869794 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfddf72a9_e04a_41e1_9f81_f41a8d7b8d9c.slice/crio-b4d5651e7aed1bfe84b4e1a0469e35bf65cfc9d795aac21f972695ff9172fc3c WatchSource:0}: Error finding container b4d5651e7aed1bfe84b4e1a0469e35bf65cfc9d795aac21f972695ff9172fc3c: Status 404 returned error can't find the container with id b4d5651e7aed1bfe84b4e1a0469e35bf65cfc9d795aac21f972695ff9172fc3c Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901347 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901394 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901438 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901488 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901555 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901588 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.901635 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: 
\"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.903401 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.904594 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.905227 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.906409 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.907717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.909444 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.909483 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.918356 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") pod \"controller-manager-6466b9bcb-hkw2b\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.922824 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") pod \"route-controller-manager-668f58b975-nxpbh\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.937554 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9"} Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.939964 4902 generic.go:334] "Generic (PLEG): container finished" podID="663aee99-c55e-45ba-b5ff-a67def0f524e" containerID="f958a13bdb6e7904db7ec1bc74b10c95e2b8e2273a9b808890869ef5f622d459" exitCode=0 Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.940019 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerDied","Data":"f958a13bdb6e7904db7ec1bc74b10c95e2b8e2273a9b808890869ef5f622d459"} Jan 21 14:37:48 crc kubenswrapper[4902]: I0121 14:37:48.942894 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerStarted","Data":"b4d5651e7aed1bfe84b4e1a0469e35bf65cfc9d795aac21f972695ff9172fc3c"} Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.103566 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.116171 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.444376 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.446610 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.449965 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.456030 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.507983 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:37:49 crc kubenswrapper[4902]: W0121 14:37:49.514669 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb38b0db_02f2_4797_831b_baadb29db220.slice/crio-a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557 WatchSource:0}: Error finding container a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557: Status 404 returned error can't find the container with id a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557 Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.553893 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:37:49 crc kubenswrapper[4902]: W0121 14:37:49.571973 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda79a8460_e7c3_4c10_b5b9_6626715eb24a.slice/crio-35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d WatchSource:0}: Error finding container 35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d: Status 404 returned error can't find the container with id 35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.610079 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.610175 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.610241 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.710574 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " 
pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.710633 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.710676 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.711153 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.711379 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.730058 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") pod \"community-operators-wx2t6\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.767762 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.972088 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" event={"ID":"cb38b0db-02f2-4797-831b-baadb29db220","Type":"ContainerStarted","Data":"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd"} Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.972422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" event={"ID":"cb38b0db-02f2-4797-831b-baadb29db220","Type":"ContainerStarted","Data":"a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557"} Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.973986 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.986567 4902 generic.go:334] "Generic (PLEG): container finished" podID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" containerID="b6d7d5d162dd8020dab67d2fe23d24005012a29169bdb88e5ec54c6cf61b4929" exitCode=0 Jan 21 14:37:49 crc kubenswrapper[4902]: I0121 14:37:49.986678 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerDied","Data":"b6d7d5d162dd8020dab67d2fe23d24005012a29169bdb88e5ec54c6cf61b4929"} Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.002410 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerStarted","Data":"7de311315f7941125515d7510324423471ae52ea94f170c56e695237656c2e2a"} Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.009113 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.012174 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" podStartSLOduration=4.012150293 podStartE2EDuration="4.012150293s" podCreationTimestamp="2026-01-21 14:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:50.000802895 +0000 UTC m=+232.077635934" watchObservedRunningTime="2026-01-21 14:37:50.012150293 +0000 UTC m=+232.088983332" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.013550 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" event={"ID":"a79a8460-e7c3-4c10-b5b9-6626715eb24a","Type":"ContainerStarted","Data":"d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa"} Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.013592 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" event={"ID":"a79a8460-e7c3-4c10-b5b9-6626715eb24a","Type":"ContainerStarted","Data":"35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d"} Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.013610 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.020909 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.058729 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" podStartSLOduration=4.058714207 podStartE2EDuration="4.058714207s" podCreationTimestamp="2026-01-21 14:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:50.057944931 +0000 UTC m=+232.134777960" watchObservedRunningTime="2026-01-21 14:37:50.058714207 +0000 UTC m=+232.135547236" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.398672 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.442478 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-26g5j"] Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.443749 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.497141 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.509921 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"] Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.527924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.527969 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.527985 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.628540 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.628589 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.628605 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.629167 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.629219 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.647263 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") pod \"certified-operators-26g5j\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:50 crc kubenswrapper[4902]: I0121 14:37:50.817235 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.018224 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerDied","Data":"75dbfffe1a292d59aebf0dda1372b5bf1cb539e9684f4315cb02199044a5774e"} Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.018030 4902 generic.go:334] "Generic (PLEG): container finished" podID="a1458bec-2134-4eb6-8510-ece2a6568215" containerID="75dbfffe1a292d59aebf0dda1372b5bf1cb539e9684f4315cb02199044a5774e" exitCode=0 Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.018326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerStarted","Data":"0c33f9b7fd46d05c8e52b7ed0e8c0e3ee3e633992cb415fa75bef4908ef2fa1f"} Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.021712 4902 generic.go:334] "Generic (PLEG): container finished" podID="663aee99-c55e-45ba-b5ff-a67def0f524e" containerID="7de311315f7941125515d7510324423471ae52ea94f170c56e695237656c2e2a" exitCode=0 Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.021841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerDied","Data":"7de311315f7941125515d7510324423471ae52ea94f170c56e695237656c2e2a"} Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.030572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerStarted","Data":"32933455214f818d252eed3ceaaa0d4a7d2f4fa096127a0a72f35ba55e453be2"} Jan 21 14:37:51 crc kubenswrapper[4902]: I0121 14:37:51.223988 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"] Jan 21 14:37:51 crc kubenswrapper[4902]: W0121 14:37:51.233547 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9904001f_3d1f_494d_bfb6_5baa56f45c7b.slice/crio-739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6 WatchSource:0}: Error finding container 739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6: Status 404 returned error can't find the container with id 739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6 Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.037345 4902 generic.go:334] "Generic (PLEG): container finished" podID="a1458bec-2134-4eb6-8510-ece2a6568215" containerID="43adeb973bdbf05aa4340e69a147ab41031881fc3cf5bd920322ca643738ff13" exitCode=0 Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.037448 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerDied","Data":"43adeb973bdbf05aa4340e69a147ab41031881fc3cf5bd920322ca643738ff13"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.041987 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppndl" event={"ID":"663aee99-c55e-45ba-b5ff-a67def0f524e","Type":"ContainerStarted","Data":"be4c066623f5d96b397cf3b197cd7394822280faa40315013d520181b2fe0bad"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.043886 
4902 generic.go:334] "Generic (PLEG): container finished" podID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" containerID="32933455214f818d252eed3ceaaa0d4a7d2f4fa096127a0a72f35ba55e453be2" exitCode=0 Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.043941 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerDied","Data":"32933455214f818d252eed3ceaaa0d4a7d2f4fa096127a0a72f35ba55e453be2"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.045679 4902 generic.go:334] "Generic (PLEG): container finished" podID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerID="de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5" exitCode=0 Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.046452 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerDied","Data":"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.046473 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerStarted","Data":"739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6"} Jan 21 14:37:52 crc kubenswrapper[4902]: I0121 14:37:52.097886 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ppndl" podStartSLOduration=2.538808746 podStartE2EDuration="5.097847374s" podCreationTimestamp="2026-01-21 14:37:47 +0000 UTC" firstStartedPulling="2026-01-21 14:37:48.941441574 +0000 UTC m=+231.018274603" lastFinishedPulling="2026-01-21 14:37:51.500480202 +0000 UTC m=+233.577313231" observedRunningTime="2026-01-21 14:37:52.095475193 +0000 UTC m=+234.172308222" watchObservedRunningTime="2026-01-21 14:37:52.097847374 +0000 UTC m=+234.174680403" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.052355 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerStarted","Data":"f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b"} Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.055666 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kplb" event={"ID":"fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c","Type":"ContainerStarted","Data":"d0bb85ff115f923a7208278f2b5eb58c0438b0731d1a5a61a24b3e079aff5c99"} Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.057568 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerStarted","Data":"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7"} Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.080464 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wx2t6" podStartSLOduration=2.63394158 podStartE2EDuration="4.080441216s" podCreationTimestamp="2026-01-21 14:37:49 +0000 UTC" firstStartedPulling="2026-01-21 14:37:51.021494422 +0000 UTC m=+233.098327451" lastFinishedPulling="2026-01-21 14:37:52.467994058 +0000 UTC m=+234.544827087" observedRunningTime="2026-01-21 14:37:53.077221986 +0000 UTC 
m=+235.154055015" watchObservedRunningTime="2026-01-21 14:37:53.080441216 +0000 UTC m=+235.157274245" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.113917 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8kplb" podStartSLOduration=2.592185592 podStartE2EDuration="5.113901662s" podCreationTimestamp="2026-01-21 14:37:48 +0000 UTC" firstStartedPulling="2026-01-21 14:37:49.988510804 +0000 UTC m=+232.065343833" lastFinishedPulling="2026-01-21 14:37:52.510226874 +0000 UTC m=+234.587059903" observedRunningTime="2026-01-21 14:37:53.11179803 +0000 UTC m=+235.188631059" watchObservedRunningTime="2026-01-21 14:37:53.113901662 +0000 UTC m=+235.190734691" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.891239 4902 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893014 4902 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893504 4902 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893653 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893852 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893944 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.894006 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893587 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.893507 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" gracePeriod=15 Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.894486 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897179 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897221 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897232 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897267 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897277 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897305 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897315 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897326 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897340 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897352 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897364 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 14:37:53 crc kubenswrapper[4902]: E0121 14:37:53.897379 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897387 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897681 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 
crc kubenswrapper[4902]: I0121 14:37:53.897699 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897718 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897733 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897743 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.897760 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.904918 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 14:37:53 crc kubenswrapper[4902]: I0121 14:37:53.943977 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.066256 4902 generic.go:334] "Generic (PLEG): container finished" podID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerID="324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7" exitCode=0 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.066342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerDied","Data":"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7"} Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.067677 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.067991 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:54 crc kubenswrapper[4902]: E0121 14:37:54.070122 4902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.21:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-26g5j.188cc5d56b5d789b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-26g5j,UID:9904001f-3d1f-494d-bfb6-5baa56f45c7b,APIVersion:v1,ResourceVersion:29868,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:37:54.069756059 +0000 UTC m=+236.146589088,LastTimestamp:2026-01-21 14:37:54.069756059 +0000 UTC m=+236.146589088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.070593 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.071919 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072795 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" exitCode=0 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072824 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" exitCode=0 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072835 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" exitCode=0 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072846 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" exitCode=2 Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.072901 4902 scope.go:117] "RemoveContainer" containerID="35f8c907d24b0f8b516fa0a82b6f64955e2430516a09a043bf17657519c60f02" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.091895 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.091964 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092001 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092020 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092068 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092092 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092119 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.092135 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.173939 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.174504 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.174662 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.174806 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 
14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193723 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193790 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193827 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193880 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193901 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193957 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.193980 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.194108 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.194158 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195407 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195445 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195475 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195504 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.195536 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.196154 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: E0121 14:37:54.198241 4902 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" volumeName="registry-storage" Jan 21 14:37:54 crc kubenswrapper[4902]: I0121 14:37:54.239205 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:37:54 crc kubenswrapper[4902]: W0121 14:37:54.257996 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-61900715ef2ada5169683220999783176e3eed57a9af374f4d6d527712b13ddc WatchSource:0}: Error finding container 61900715ef2ada5169683220999783176e3eed57a9af374f4d6d527712b13ddc: Status 404 returned error can't find the container with id 61900715ef2ada5169683220999783176e3eed57a9af374f4d6d527712b13ddc Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.084850 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ba32cd74229e0518ced5be5a05249085a9351531d73703f33e7dd49b6eafbb78"} Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.085724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"61900715ef2ada5169683220999783176e3eed57a9af374f4d6d527712b13ddc"} Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.092487 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.095864 4902 generic.go:334] "Generic (PLEG): container finished" podID="84af95e1-2275-49b2-987c-afa33fb32734" containerID="ce71892c9cc4a5eca454b5acdd2876bc8fdf1542a231264709d1d8546488cc23" exitCode=0 Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.095922 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84af95e1-2275-49b2-987c-afa33fb32734","Type":"ContainerDied","Data":"ce71892c9cc4a5eca454b5acdd2876bc8fdf1542a231264709d1d8546488cc23"} Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.098944 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.099338 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.099527 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:55 crc kubenswrapper[4902]: I0121 14:37:55.099711 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.105096 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerStarted","Data":"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6"} Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.109121 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.109685 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.110105 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.110350 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.110761 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.111190 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.111669 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.111888 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.823848 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:37:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:37:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:37:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:37:56Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.824945 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.825393 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.825679 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.825949 4902 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: E0121 14:37:56.825974 4902 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.870641 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.871494 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.871988 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.872326 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.872687 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.872933 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.873271 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.877181 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.877745 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.878106 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.878598 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.878857 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:56 crc kubenswrapper[4902]: I0121 14:37:56.879174 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037603 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037631 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037688 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") pod \"84af95e1-2275-49b2-987c-afa33fb32734\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037717 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037755 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") pod \"84af95e1-2275-49b2-987c-afa33fb32734\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037769 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84af95e1-2275-49b2-987c-afa33fb32734" (UID: "84af95e1-2275-49b2-987c-afa33fb32734"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037762 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037791 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037788 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") pod \"84af95e1-2275-49b2-987c-afa33fb32734\" (UID: \"84af95e1-2275-49b2-987c-afa33fb32734\") " Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.037908 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock" (OuterVolumeSpecName: "var-lock") pod "84af95e1-2275-49b2-987c-afa33fb32734" (UID: "84af95e1-2275-49b2-987c-afa33fb32734"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038266 4902 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038285 4902 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038294 4902 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038301 4902 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.038309 4902 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84af95e1-2275-49b2-987c-afa33fb32734-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.044134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84af95e1-2275-49b2-987c-afa33fb32734" (UID: "84af95e1-2275-49b2-987c-afa33fb32734"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.116649 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84af95e1-2275-49b2-987c-afa33fb32734","Type":"ContainerDied","Data":"1e0e3b99d3e199bf0a5109aed2aaf9e421c0eab2e9ccba48ebac5e8687fa5207"} Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.116690 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e0e3b99d3e199bf0a5109aed2aaf9e421c0eab2e9ccba48ebac5e8687fa5207" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.116995 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.120383 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.120938 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" exitCode=0 Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.122403 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.126335 4902 scope.go:117] "RemoveContainer" containerID="12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142122 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142320 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142498 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142673 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.142847 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.148203 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84af95e1-2275-49b2-987c-afa33fb32734-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.152757 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.153238 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.153869 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.153886 4902 scope.go:117] "RemoveContainer" containerID="3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.154166 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.154714 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.170892 4902 scope.go:117] "RemoveContainer" containerID="0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.189166 4902 scope.go:117] "RemoveContainer" containerID="d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.209758 4902 scope.go:117] "RemoveContainer" containerID="56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.226499 4902 scope.go:117] "RemoveContainer" containerID="3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.253065 4902 scope.go:117] "RemoveContainer" containerID="12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.253488 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\": container with ID starting with 12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405 not found: ID does not exist" containerID="12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.253530 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405"} err="failed to get container status \"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\": rpc error: code = NotFound desc = could not find container \"12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405\": container with ID starting with 12775d9a88afdffa3b4fad4d6374266e2f403550edd66b483f93f7e827659405 not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.253563 4902 scope.go:117] "RemoveContainer" containerID="3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.254136 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\": container with ID starting with 3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c not found: ID does not exist" containerID="3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.254201 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c"} err="failed to get container status \"3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\": rpc error: code = NotFound desc = could not find container \"3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c\": container with ID starting with 3956412b7395f216dcd7e11422567617c3bbddf46672ec60680e837e28813b1c not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.254229 4902 scope.go:117] "RemoveContainer" containerID="0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.255684 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\": container with ID starting with 0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9 not found: ID does not exist" containerID="0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.255716 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9"} err="failed to get container status \"0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\": rpc error: code = NotFound desc = could not find container \"0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9\": container with ID starting with 0ea0b5580a747ad8b4ef41ac4a353e5bc5ffa591a4e9e619e2ed1fc923f89ad9 not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.255745 4902 scope.go:117] "RemoveContainer" containerID="d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.256232 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\": container with ID starting with d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2 not found: ID does not exist" containerID="d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256259 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2"} err="failed to get container status \"d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\": rpc error: code = NotFound desc = could not find container \"d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2\": container with ID starting with d3c6831594d337b815087f4dbf5dd3197141e02621f79a06148d821fb703d4b2 not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256276 4902 scope.go:117] "RemoveContainer" containerID="56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" Jan 21 14:37:57 crc 
kubenswrapper[4902]: E0121 14:37:57.256537 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\": container with ID starting with 56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551 not found: ID does not exist" containerID="56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256557 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551"} err="failed to get container status \"56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\": rpc error: code = NotFound desc = could not find container \"56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551\": container with ID starting with 56e7813b7f20540f440c3f15d7f242d9891a1737e33f958f70b9f638d39ac551 not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256570 4902 scope.go:117] "RemoveContainer" containerID="3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e" Jan 21 14:37:57 crc kubenswrapper[4902]: E0121 14:37:57.256817 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\": container with ID starting with 3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e not found: ID does not exist" containerID="3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.256833 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e"} err="failed to get container status \"3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\": rpc error: code = NotFound desc = could not find container \"3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e\": container with ID starting with 3d9739804a0d78d06314c2d3793abf1e99bf1b981d8be57d4e37bae5caffce9e not found: ID does not exist" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.370078 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.370118 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.432587 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.433817 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.434431 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.435008 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.435664 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.436861 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:57 crc kubenswrapper[4902]: I0121 14:37:57.437151 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.162930 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ppndl" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.163926 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.164361 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.164573 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.164723 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection 
refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.164872 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.165010 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.300792 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.301619 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.302074 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.302301 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.302489 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.302720 4902 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.311123 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.401743 4902 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.401789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.440648 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.441337 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.441735 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.442026 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.442308 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.442520 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:58 crc kubenswrapper[4902]: I0121 14:37:58.442820 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.172963 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8kplb" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.173997 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.174266 4902 status_manager.go:851] 
"Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.174618 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.175274 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.175658 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.176163 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.336856 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.337257 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.337548 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.337915 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.338226 4902 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.338361 4902 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update 
lease" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.338706 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="200ms" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.539767 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="400ms" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.768398 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.768463 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.804840 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.805269 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.805660 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.806214 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.806639 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.806914 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.807283 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: I0121 14:37:59.807639 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:37:59 crc kubenswrapper[4902]: E0121 14:37:59.940722 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="800ms" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.178088 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.178487 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.178776 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.178980 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.179189 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.179433 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.179696 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.179932 
4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: E0121 14:38:00.743269 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="1.6s" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.818730 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.818813 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.857379 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.857875 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.858195 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.858433 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.858695 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.858964 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.859222 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:00 crc kubenswrapper[4902]: I0121 14:38:00.859467 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.177184 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.177799 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.178408 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.178887 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.179192 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.179476 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.179769 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:01 crc kubenswrapper[4902]: I0121 14:38:01.180026 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 
14:38:02 crc kubenswrapper[4902]: E0121 14:38:02.008445 4902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.21:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-26g5j.188cc5d56b5d789b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-26g5j,UID:9904001f-3d1f-494d-bfb6-5baa56f45c7b,APIVersion:v1,ResourceVersion:29868,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:37:54.069756059 +0000 UTC m=+236.146589088,LastTimestamp:2026-01-21 14:37:54.069756059 +0000 UTC m=+236.146589088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 14:38:02 crc kubenswrapper[4902]: E0121 14:38:02.344850 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="3.2s" Jan 21 14:38:05 crc kubenswrapper[4902]: E0121 14:38:05.545678 4902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.21:6443: connect: connection refused" interval="6.4s" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.294513 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.295651 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.296152 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.296460 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.296695 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.296894 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.297099 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.297292 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.308020 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.308063 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:07 crc kubenswrapper[4902]: E0121 14:38:07.308381 4902 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: 
connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:07 crc kubenswrapper[4902]: I0121 14:38:07.308794 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175340 4902 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="dc58673a1dc1631e428ca61fa990459af44227104c602aee2effaba0e45ffddf" exitCode=0 Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175420 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"dc58673a1dc1631e428ca61fa990459af44227104c602aee2effaba0e45ffddf"} Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175639 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64203db93c1e54e1cf79bbfe8881d127e48e32a2911c68da90eca6a89cc36ee3"} Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175912 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.175932 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:08 crc kubenswrapper[4902]: E0121 14:38:08.176380 4902 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.176434 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.176931 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.177172 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.177347 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.177602 4902 
status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.178016 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.178454 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.178884 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.179016 4902 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2" exitCode=1 Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.179062 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2"} Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.179462 4902 scope.go:117] "RemoveContainer" containerID="9469f736f537edd7d839c1c4de4c2859809f4365103d11e51897b32a02829be2" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.179989 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.180382 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.180610 4902 status_manager.go:851] "Failed to get status for pod" podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.180836 4902 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.181134 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.181376 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.181670 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.182186 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.303426 4902 status_manager.go:851] "Failed to get status for pod" podUID="663aee99-c55e-45ba-b5ff-a67def0f524e" pod="openshift-marketplace/redhat-marketplace-ppndl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppndl\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.304082 4902 status_manager.go:851] "Failed to get status for pod" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" pod="openshift-marketplace/community-operators-wx2t6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wx2t6\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.304595 4902 status_manager.go:851] "Failed to get status for pod" podUID="84af95e1-2275-49b2-987c-afa33fb32734" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.304894 4902 status_manager.go:851] "Failed to get status for pod" podUID="fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c" pod="openshift-marketplace/redhat-operators-8kplb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8kplb\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305105 4902 status_manager.go:851] "Failed to get status for pod" 
podUID="bb2b422b-c8b3-48ec-901a-e9da16f653fa" pod="openshift-image-registry/image-registry-66df7c8f76-gzb8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-gzb8l\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305292 4902 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305476 4902 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305699 4902 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.305891 4902 status_manager.go:851] "Failed to get status for pod" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" pod="openshift-marketplace/certified-operators-26g5j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-26g5j\": dial tcp 38.129.56.21:6443: connect: connection refused" Jan 21 14:38:08 crc kubenswrapper[4902]: I0121 14:38:08.766471 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.190670 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dff9c1202671af0b0a361747134a8092475701e7caa07ad1784ddcd6da6be2fe"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.190990 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"823188dc8f43c7423093c090374ff8be58e4e79e28b7a02a3a9d30349f9c9693"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.191005 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c3482f1a841ee8066b99cd19499dcd169de10e84763f4402a7f79cf751954b1"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.191015 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ad5166b668349af6575f85f6b6d5d5594bcc1811143eb1c97afcfd05ef5d83c6"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.195525 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.195587 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f38e90723c687ad61c1b4f3fc03a3c99070f6e5ce4450df78af77e8fb2cd34c3"} Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.273563 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerName="oauth-openshift" containerID="cri-o://38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350" gracePeriod=15 Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.726496 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.842775 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843107 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843132 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843176 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843196 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.842926 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843216 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843380 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843471 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843527 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843548 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843575 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843604 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843647 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843676 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") pod \"0c16a673-e56a-49ff-ac34-6910e02214a6\" (UID: \"0c16a673-e56a-49ff-ac34-6910e02214a6\") " Jan 21 14:38:09 crc kubenswrapper[4902]: 
I0121 14:38:09.843838 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.843961 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.844269 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.844302 4902 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.844325 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.844706 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.845018 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.867814 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.868128 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.872437 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45" (OuterVolumeSpecName: "kube-api-access-v4l45") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "kube-api-access-v4l45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.875265 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.875626 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.879801 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.880142 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.881409 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.881836 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0c16a673-e56a-49ff-ac34-6910e02214a6" (UID: "0c16a673-e56a-49ff-ac34-6910e02214a6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945540 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945576 4902 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945586 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945596 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945605 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945615 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945625 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945636 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945646 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4l45\" (UniqueName: \"kubernetes.io/projected/0c16a673-e56a-49ff-ac34-6910e02214a6-kube-api-access-v4l45\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945655 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 
14:38:09 crc kubenswrapper[4902]: I0121 14:38:09.945665 4902 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c16a673-e56a-49ff-ac34-6910e02214a6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.205071 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd3eb8aa9e3640a1b39ba0a20e8ed265c0e4eb9a3df867da8f6365840f2fb53b"} Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.205397 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.205416 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.205672 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.207241 4902 generic.go:334] "Generic (PLEG): container finished" podID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerID="38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350" exitCode=0 Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.208143 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.211100 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" event={"ID":"0c16a673-e56a-49ff-ac34-6910e02214a6","Type":"ContainerDied","Data":"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350"} Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.211134 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-n2xzb" event={"ID":"0c16a673-e56a-49ff-ac34-6910e02214a6","Type":"ContainerDied","Data":"7b9eaa6ff12a7628df3550e4b5486c4dd30838dd795331af359c3d19256bdd60"} Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.211151 4902 scope.go:117] "RemoveContainer" containerID="38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350" Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.226760 4902 scope.go:117] "RemoveContainer" containerID="38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350" Jan 21 14:38:10 crc kubenswrapper[4902]: E0121 14:38:10.227201 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350\": container with ID starting with 38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350 not found: ID does not exist" containerID="38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350" Jan 21 14:38:10 crc kubenswrapper[4902]: I0121 14:38:10.227250 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350"} err="failed to get container status \"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350\": rpc error: code = NotFound desc = could not find 
container \"38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350\": container with ID starting with 38810a3e1d798bdb32aa1f729a2804223d6edacb9fd7fec0bfeeaa64fed77350 not found: ID does not exist" Jan 21 14:38:12 crc kubenswrapper[4902]: I0121 14:38:12.330476 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:12 crc kubenswrapper[4902]: I0121 14:38:12.330891 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:12 crc kubenswrapper[4902]: I0121 14:38:12.330915 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:13 crc kubenswrapper[4902]: I0121 14:38:13.483216 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:13 crc kubenswrapper[4902]: I0121 14:38:13.487406 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:13 crc kubenswrapper[4902]: I0121 14:38:13.579737 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.211738 4902 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.214929 4902 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8af838db-be80-416e-baea-f302db74939c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad5166b668349af6575f85f6b6d5d5594bcc1811143eb1c97afcfd05ef5d83c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://823188dc8f43c7423093c090374ff8be58e4e79e28b7a02a3a9d30349f9c9693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3482f1a841ee8066b99cd19499dcd169de10e84763f4402a7f79cf751954b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd3eb8aa9e3640a1b39ba0a20e8ed265c0e4eb9a3df867da8f6365840f2fb53b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff9c1202671af0b0a361747134a8092475701e7caa07ad1784ddcd6da6be2fe\\\",\\\"image\\\
":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:38:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.233886 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.233920 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.237427 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:15 crc kubenswrapper[4902]: I0121 14:38:15.243065 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1dd7ed5c-b84b-483e-a79c-dd31f29665ca" Jan 21 14:38:16 crc kubenswrapper[4902]: I0121 14:38:16.238474 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:16 crc kubenswrapper[4902]: I0121 14:38:16.239642 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:18 crc kubenswrapper[4902]: I0121 14:38:18.317146 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1dd7ed5c-b84b-483e-a79c-dd31f29665ca" Jan 21 14:38:23 crc kubenswrapper[4902]: I0121 14:38:23.586306 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:38:24 crc kubenswrapper[4902]: I0121 14:38:24.549702 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 14:38:25 crc kubenswrapper[4902]: I0121 14:38:25.313423 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 14:38:25 crc kubenswrapper[4902]: I0121 14:38:25.463921 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 14:38:25 crc kubenswrapper[4902]: I0121 14:38:25.846723 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.782439 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.782790 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 14:38:26 crc 
kubenswrapper[4902]: I0121 14:38:26.783016 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.787313 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.842004 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 14:38:26 crc kubenswrapper[4902]: I0121 14:38:26.896344 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.095738 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.159865 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.169833 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.374956 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.401679 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.484222 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.543699 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.593940 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.829179 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.835487 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.849315 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.866520 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.943190 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.968317 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 14:38:27 crc kubenswrapper[4902]: I0121 14:38:27.984499 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 
14:38:28.044511 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.061286 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.067207 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.076108 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.093321 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.209572 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.487737 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.552878 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.772969 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.834085 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.841236 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.884765 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.923773 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.967757 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 14:38:28 crc kubenswrapper[4902]: I0121 14:38:28.997357 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.011549 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.031937 4902 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.140309 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.145896 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 
14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.195225 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.294096 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.314098 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.477079 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.488340 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.528597 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.567262 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.581960 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.588224 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.678491 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.705749 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.764880 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.776441 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.867674 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.898580 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.946844 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.969671 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 14:38:29 crc kubenswrapper[4902]: I0121 14:38:29.986483 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.038532 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 14:38:30 crc kubenswrapper[4902]: 
I0121 14:38:30.070018 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.091145 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.131895 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.309811 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.347867 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.362166 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.367099 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.504095 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.584729 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.743619 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.901065 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 14:38:30 crc kubenswrapper[4902]: I0121 14:38:30.948475 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.068973 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.071209 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.193572 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.225889 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.285242 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.506499 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.525772 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.622178 4902 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.710265 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.724572 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.778899 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.812632 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.825573 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.911871 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 14:38:31 crc kubenswrapper[4902]: I0121 14:38:31.990772 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.035413 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.061269 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.076176 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.155642 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.304454 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.529393 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.698101 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.832639 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.860900 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.914685 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.931008 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 
14:38:32 crc kubenswrapper[4902]: I0121 14:38:32.971444 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.033608 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.199506 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.206771 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.270154 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.370461 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.388491 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.394180 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.619644 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.798329 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.845922 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.892037 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.923256 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.931743 4902 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.932944 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.932928018 podStartE2EDuration="40.932928018s" podCreationTimestamp="2026-01-21 14:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:15.09741177 +0000 UTC m=+257.174244799" watchObservedRunningTime="2026-01-21 14:38:33.932928018 +0000 UTC m=+276.009761047" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.934324 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-26g5j" podStartSLOduration=40.832240482 podStartE2EDuration="43.934316281s" podCreationTimestamp="2026-01-21 14:37:50 +0000 UTC" firstStartedPulling="2026-01-21 14:37:52.048226966 +0000 UTC m=+234.125059995" 
lastFinishedPulling="2026-01-21 14:37:55.150302765 +0000 UTC m=+237.227135794" observedRunningTime="2026-01-21 14:38:15.134447368 +0000 UTC m=+257.211280397" watchObservedRunningTime="2026-01-21 14:38:33.934316281 +0000 UTC m=+276.011149320" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.936831 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-n2xzb","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.936891 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-984c8fd85-7vnz7","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:38:33 crc kubenswrapper[4902]: E0121 14:38:33.937117 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84af95e1-2275-49b2-987c-afa33fb32734" containerName="installer" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937140 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="84af95e1-2275-49b2-987c-afa33fb32734" containerName="installer" Jan 21 14:38:33 crc kubenswrapper[4902]: E0121 14:38:33.937164 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerName="oauth-openshift" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937174 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerName="oauth-openshift" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937436 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" containerName="oauth-openshift" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937461 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="84af95e1-2275-49b2-987c-afa33fb32734" containerName="installer" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937464 4902 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937496 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8af838db-be80-416e-baea-f302db74939c" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.937913 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.940609 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941098 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941348 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941682 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941725 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941817 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941829 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.941867 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.942058 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.942095 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.942325 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.942449 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.947221 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.948359 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.961030 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.965003 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.966990 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973163 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-session\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-dir\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973419 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973479 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-policies\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973562 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-service-ca\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973629 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-login\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973740 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-router-certs\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.973811 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974325 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nqs8\" (UniqueName: \"kubernetes.io/projected/59eebcd0-5352-4547-b84b-8de6538c7a03-kube-api-access-8nqs8\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974469 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974499 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-error\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.974555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:33 crc kubenswrapper[4902]: I0121 14:38:33.986699 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.98667697 podStartE2EDuration="18.98667697s" podCreationTimestamp="2026-01-21 14:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:33.98344012 +0000 UTC m=+276.060273149" watchObservedRunningTime="2026-01-21 14:38:33.98667697 +0000 UTC m=+276.063510019" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.010865 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.022539 4902 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075238 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nqs8\" (UniqueName: \"kubernetes.io/projected/59eebcd0-5352-4547-b84b-8de6538c7a03-kube-api-access-8nqs8\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075291 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075363 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-error\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075417 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-session\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075439 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-dir\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 
14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075489 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075509 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-policies\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-service-ca\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075567 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-login\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075597 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-router-certs\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.075630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.076748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-dir\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.077018 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.077219 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-audit-policies\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.077443 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.077873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-service-ca\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.081785 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-router-certs\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.081960 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.082338 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-session\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.082392 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.082528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.082780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-error\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.083534 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-user-template-login\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.086723 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/59eebcd0-5352-4547-b84b-8de6538c7a03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.092093 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.093587 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nqs8\" (UniqueName: \"kubernetes.io/projected/59eebcd0-5352-4547-b84b-8de6538c7a03-kube-api-access-8nqs8\") pod \"oauth-openshift-984c8fd85-7vnz7\" (UID: \"59eebcd0-5352-4547-b84b-8de6538c7a03\") " pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.131519 4902 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.149544 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.166871 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.237648 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.259239 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.295944 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.301127 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c16a673-e56a-49ff-ac34-6910e02214a6" path="/var/lib/kubelet/pods/0c16a673-e56a-49ff-ac34-6910e02214a6/volumes" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.311469 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.354218 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.372289 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.380214 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.413950 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.421361 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.438936 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.452459 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.511499 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.530089 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.539263 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.542064 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.568059 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.707243 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.815228 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 14:38:34.916530 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 14:38:34 crc kubenswrapper[4902]: I0121 
14:38:34.916628 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.068282 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.095009 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.108982 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.158616 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.252156 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.342914 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.445881 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.481058 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.552799 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.565967 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.570813 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.580232 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.597311 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.622438 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.657020 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.699307 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.798766 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 14:38:35 crc kubenswrapper[4902]: I0121 14:38:35.973476 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-984c8fd85-7vnz7"] Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.104004 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.131813 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.216437 4902 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.285421 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.619804 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.663843 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.665884 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.852138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" event={"ID":"59eebcd0-5352-4547-b84b-8de6538c7a03","Type":"ContainerStarted","Data":"869c1f5dc9f4ec2f0178b825ba09a2f9f2fe20a5dc314ece6abcacfb7fd245c9"} Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.852205 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" event={"ID":"59eebcd0-5352-4547-b84b-8de6538c7a03","Type":"ContainerStarted","Data":"f20b329b2afcc59d2ad8385b89bb7a8933a00cb1bdad6a8d834a7cc51454aca7"} Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.853390 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.858392 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.874248 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-984c8fd85-7vnz7" podStartSLOduration=52.874228308 podStartE2EDuration="52.874228308s" podCreationTimestamp="2026-01-21 14:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:36.871925756 +0000 UTC m=+278.948758825" watchObservedRunningTime="2026-01-21 14:38:36.874228308 +0000 UTC m=+278.951061337" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.887535 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.888969 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 14:38:36 crc kubenswrapper[4902]: I0121 14:38:36.967237 4902 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.005671 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.061073 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.132014 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.181371 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.275069 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.275103 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.275767 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.305793 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.331989 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.346915 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.480630 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.628510 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.656971 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.663358 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.702302 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.716608 4902 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.716877 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ba32cd74229e0518ced5be5a05249085a9351531d73703f33e7dd49b6eafbb78" gracePeriod=5 Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.744694 4902 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.782003 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.808499 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.817177 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.827936 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.926202 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.949384 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.960204 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 14:38:37 crc kubenswrapper[4902]: I0121 14:38:37.998730 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.022823 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.095347 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.118801 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.397663 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.404788 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.442436 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.516009 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.650477 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.749486 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.770494 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 
14:38:38.777659 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.796938 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 14:38:38 crc kubenswrapper[4902]: I0121 14:38:38.843012 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.004790 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.016237 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.038535 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.094719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.314288 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.351549 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.429134 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.429515 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.505493 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.545125 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.545942 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.630659 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.717462 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.841466 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.934986 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 14:38:39.935220 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 14:38:39 crc kubenswrapper[4902]: I0121 
14:38:39.947728 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.038521 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.050734 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.320873 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.360304 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.528783 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.597684 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.613036 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.847513 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:38:40 crc kubenswrapper[4902]: I0121 14:38:40.912086 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.065315 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.425884 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.455168 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.536269 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 14:38:41 crc kubenswrapper[4902]: I0121 14:38:41.782887 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 14:38:42 crc kubenswrapper[4902]: I0121 14:38:42.158061 4902 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 14:38:42 crc kubenswrapper[4902]: I0121 14:38:42.892606 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 14:38:42 crc kubenswrapper[4902]: I0121 14:38:42.892912 4902 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ba32cd74229e0518ced5be5a05249085a9351531d73703f33e7dd49b6eafbb78" exitCode=137 Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.286682 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.286753 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403172 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403224 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403303 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403308 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403362 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403394 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403441 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403450 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403499 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403669 4902 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403690 4902 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403700 4902 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.403710 4902 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.412658 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.474771 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.504531 4902 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.899609 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.899691 4902 scope.go:117] "RemoveContainer" containerID="ba32cd74229e0518ced5be5a05249085a9351531d73703f33e7dd49b6eafbb78" Jan 21 14:38:43 crc kubenswrapper[4902]: I0121 14:38:43.899781 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.301649 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.302345 4902 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.312671 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.312720 4902 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6611278e-a0c8-4ecd-a30f-18efb57ba215" Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.316070 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:38:44 crc kubenswrapper[4902]: I0121 14:38:44.316157 4902 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6611278e-a0c8-4ecd-a30f-18efb57ba215" Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.714868 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.715500 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerName="controller-manager" containerID="cri-o://d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa" gracePeriod=30 Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.816693 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.817004 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" podUID="cb38b0db-02f2-4797-831b-baadb29db220" containerName="route-controller-manager" containerID="cri-o://7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" gracePeriod=30 Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.920630 4902 generic.go:334] "Generic (PLEG): container finished" podID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerID="d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa" exitCode=0 Jan 21 14:38:46 crc kubenswrapper[4902]: I0121 14:38:46.920672 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" event={"ID":"a79a8460-e7c3-4c10-b5b9-6626715eb24a","Type":"ContainerDied","Data":"d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa"} Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.190879 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.196981 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361351 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361459 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") pod \"cb38b0db-02f2-4797-831b-baadb29db220\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361498 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361521 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361558 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361614 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") pod \"cb38b0db-02f2-4797-831b-baadb29db220\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") pod \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\" (UID: \"a79a8460-e7c3-4c10-b5b9-6626715eb24a\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361685 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") pod \"cb38b0db-02f2-4797-831b-baadb29db220\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.361726 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") pod \"cb38b0db-02f2-4797-831b-baadb29db220\" (UID: \"cb38b0db-02f2-4797-831b-baadb29db220\") " Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.362764 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" 
(UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.362817 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" (UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.363152 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config" (OuterVolumeSpecName: "config") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" (UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.363292 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca" (OuterVolumeSpecName: "client-ca") pod "cb38b0db-02f2-4797-831b-baadb29db220" (UID: "cb38b0db-02f2-4797-831b-baadb29db220"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.363310 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config" (OuterVolumeSpecName: "config") pod "cb38b0db-02f2-4797-831b-baadb29db220" (UID: "cb38b0db-02f2-4797-831b-baadb29db220"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.367881 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb38b0db-02f2-4797-831b-baadb29db220" (UID: "cb38b0db-02f2-4797-831b-baadb29db220"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.367963 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" (UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.368106 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj" (OuterVolumeSpecName: "kube-api-access-4qjlj") pod "cb38b0db-02f2-4797-831b-baadb29db220" (UID: "cb38b0db-02f2-4797-831b-baadb29db220"). InnerVolumeSpecName "kube-api-access-4qjlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.368913 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb" (OuterVolumeSpecName: "kube-api-access-7hwbb") pod "a79a8460-e7c3-4c10-b5b9-6626715eb24a" (UID: "a79a8460-e7c3-4c10-b5b9-6626715eb24a"). 
InnerVolumeSpecName "kube-api-access-7hwbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.463748 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.463800 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.463808 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.463881 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb38b0db-02f2-4797-831b-baadb29db220-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464453 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb38b0db-02f2-4797-831b-baadb29db220-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464484 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79a8460-e7c3-4c10-b5b9-6626715eb24a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464496 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qjlj\" (UniqueName: \"kubernetes.io/projected/cb38b0db-02f2-4797-831b-baadb29db220-kube-api-access-4qjlj\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464508 4902 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a79a8460-e7c3-4c10-b5b9-6626715eb24a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.464517 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hwbb\" (UniqueName: \"kubernetes.io/projected/a79a8460-e7c3-4c10-b5b9-6626715eb24a-kube-api-access-7hwbb\") on node \"crc\" DevicePath \"\"" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.819684 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bd978d84c-b5724"] Jan 21 14:38:47 crc kubenswrapper[4902]: E0121 14:38:47.819970 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.819988 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 14:38:47 crc kubenswrapper[4902]: E0121 14:38:47.820011 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb38b0db-02f2-4797-831b-baadb29db220" containerName="route-controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.820019 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb38b0db-02f2-4797-831b-baadb29db220" containerName="route-controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: E0121 14:38:47.820036 4902 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerName="controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.820049 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerName="controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.821516 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.821543 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb38b0db-02f2-4797-831b-baadb29db220" containerName="route-controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.821563 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" containerName="controller-manager" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.822274 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.836466 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd978d84c-b5724"] Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.928658 4902 generic.go:334] "Generic (PLEG): container finished" podID="cb38b0db-02f2-4797-831b-baadb29db220" containerID="7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" exitCode=0 Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.928771 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" event={"ID":"cb38b0db-02f2-4797-831b-baadb29db220","Type":"ContainerDied","Data":"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd"} Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.928825 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" event={"ID":"cb38b0db-02f2-4797-831b-baadb29db220","Type":"ContainerDied","Data":"a9e0a2e8240b1e4870419d402cb4655289e3ed04ceb3d2a54121480c8cb83557"} Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.928843 4902 scope.go:117] "RemoveContainer" containerID="7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.929011 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.930453 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" event={"ID":"a79a8460-e7c3-4c10-b5b9-6626715eb24a","Type":"ContainerDied","Data":"35dad082b0eb1ecb464f03e9456b66ffeaa63c62d77bb1e4f7192a72abdf2c4d"} Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.930535 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b9bcb-hkw2b" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.945556 4902 scope.go:117] "RemoveContainer" containerID="7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" Jan 21 14:38:47 crc kubenswrapper[4902]: E0121 14:38:47.946264 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd\": container with ID starting with 7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd not found: ID does not exist" containerID="7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.946307 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd"} err="failed to get container status \"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd\": rpc error: code = NotFound desc = could not find container \"7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd\": container with ID starting with 7ef47f6ddf9ebe68a286e6db741f790a47cfca4098d6767954bf2011e601abfd not found: ID does not exist" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.946336 4902 scope.go:117] "RemoveContainer" containerID="d7270dbcc770b97b85d9ccbb0214929ed8fa65fd7e0aee8a26b7893223ddebfa" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.965335 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.970920 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5srg5\" (UniqueName: \"kubernetes.io/projected/125eaa50-dfc7-4d81-8e49-28c62e080939-kube-api-access-5srg5\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.970975 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-config\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.971000 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125eaa50-dfc7-4d81-8e49-28c62e080939-serving-cert\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.971082 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-client-ca\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.971107 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-proxy-ca-bundles\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.973720 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6466b9bcb-hkw2b"] Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.977053 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:38:47 crc kubenswrapper[4902]: I0121 14:38:47.980433 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-668f58b975-nxpbh"] Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.071997 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-client-ca\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.072119 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-proxy-ca-bundles\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.072176 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5srg5\" (UniqueName: \"kubernetes.io/projected/125eaa50-dfc7-4d81-8e49-28c62e080939-kube-api-access-5srg5\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.072229 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-config\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.072264 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125eaa50-dfc7-4d81-8e49-28c62e080939-serving-cert\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.073107 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-client-ca\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.073461 4902 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-proxy-ca-bundles\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.073717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125eaa50-dfc7-4d81-8e49-28c62e080939-config\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.086895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/125eaa50-dfc7-4d81-8e49-28c62e080939-serving-cert\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.094275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5srg5\" (UniqueName: \"kubernetes.io/projected/125eaa50-dfc7-4d81-8e49-28c62e080939-kube-api-access-5srg5\") pod \"controller-manager-bd978d84c-b5724\" (UID: \"125eaa50-dfc7-4d81-8e49-28c62e080939\") " pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.175639 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.301621 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79a8460-e7c3-4c10-b5b9-6626715eb24a" path="/var/lib/kubelet/pods/a79a8460-e7c3-4c10-b5b9-6626715eb24a/volumes" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.302978 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb38b0db-02f2-4797-831b-baadb29db220" path="/var/lib/kubelet/pods/cb38b0db-02f2-4797-831b-baadb29db220/volumes" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.619197 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd978d84c-b5724"] Jan 21 14:38:48 crc kubenswrapper[4902]: W0121 14:38:48.624269 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod125eaa50_dfc7_4d81_8e49_28c62e080939.slice/crio-7e5d892850f253ed31b54fcf96bbe2dd5879eb1ddef0410a13f4406da7111df4 WatchSource:0}: Error finding container 7e5d892850f253ed31b54fcf96bbe2dd5879eb1ddef0410a13f4406da7111df4: Status 404 returned error can't find the container with id 7e5d892850f253ed31b54fcf96bbe2dd5879eb1ddef0410a13f4406da7111df4 Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.817080 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.817949 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820318 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820339 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820419 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820487 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.820325 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.821301 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.832451 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.887253 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.887313 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.887514 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.887693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.939783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" 
event={"ID":"125eaa50-dfc7-4d81-8e49-28c62e080939","Type":"ContainerStarted","Data":"70a7c205fbbabf5baec83c64bd2a844890660d39714ab1e0611b23c452ed7f43"} Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.939834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" event={"ID":"125eaa50-dfc7-4d81-8e49-28c62e080939","Type":"ContainerStarted","Data":"7e5d892850f253ed31b54fcf96bbe2dd5879eb1ddef0410a13f4406da7111df4"} Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.940030 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.970133 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" podStartSLOduration=2.970115194 podStartE2EDuration="2.970115194s" podCreationTimestamp="2026-01-21 14:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:48.965963675 +0000 UTC m=+291.042796704" watchObservedRunningTime="2026-01-21 14:38:48.970115194 +0000 UTC m=+291.046948223" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.989462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.989586 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.989673 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.989697 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.990769 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.991141 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:48 crc kubenswrapper[4902]: I0121 14:38:48.996784 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bd978d84c-b5724" Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.009473 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.016671 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") pod \"route-controller-manager-54b864cd9c-j48hh\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.153897 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.332667 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:38:49 crc kubenswrapper[4902]: W0121 14:38:49.337741 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19c29a84_66f0_4537_b266_ec000b3bd70e.slice/crio-26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7 WatchSource:0}: Error finding container 26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7: Status 404 returned error can't find the container with id 26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7 Jan 21 14:38:49 crc kubenswrapper[4902]: I0121 14:38:49.945535 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" event={"ID":"19c29a84-66f0-4537-b266-ec000b3bd70e","Type":"ContainerStarted","Data":"26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7"} Jan 21 14:38:50 crc kubenswrapper[4902]: I0121 14:38:50.955928 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" event={"ID":"19c29a84-66f0-4537-b266-ec000b3bd70e","Type":"ContainerStarted","Data":"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45"} Jan 21 14:38:50 crc kubenswrapper[4902]: I0121 14:38:50.956899 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:50 crc kubenswrapper[4902]: I0121 14:38:50.965626 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:38:50 crc kubenswrapper[4902]: I0121 
14:38:50.983866 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" podStartSLOduration=4.983832945 podStartE2EDuration="4.983832945s" podCreationTimestamp="2026-01-21 14:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:38:50.978721296 +0000 UTC m=+293.055554325" watchObservedRunningTime="2026-01-21 14:38:50.983832945 +0000 UTC m=+293.060665974" Jan 21 14:38:58 crc kubenswrapper[4902]: I0121 14:38:58.147638 4902 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 14:39:05 crc kubenswrapper[4902]: I0121 14:39:05.912138 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" containerID="cri-o://7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83" gracePeriod=30 Jan 21 14:39:06 crc kubenswrapper[4902]: I0121 14:39:06.715521 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:39:06 crc kubenswrapper[4902]: I0121 14:39:06.715767 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerName="route-controller-manager" containerID="cri-o://45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" gracePeriod=30 Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.766175 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.811984 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:39:07 crc kubenswrapper[4902]: E0121 14:39:07.812224 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerName="route-controller-manager" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.812235 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerName="route-controller-manager" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.812334 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerName="route-controller-manager" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.812731 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.813283 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.824950 4902 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-9nccj container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.26:5000/healthz\": dial tcp 10.217.0.26:5000: connect: connection refused" start-of-body= Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.827177 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.26:5000/healthz\": dial tcp 10.217.0.26:5000: connect: connection refused" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.861093 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.861182 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.861273 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.861369 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963176 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") pod \"19c29a84-66f0-4537-b266-ec000b3bd70e\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963235 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") pod \"19c29a84-66f0-4537-b266-ec000b3bd70e\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") 
" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963266 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") pod \"19c29a84-66f0-4537-b266-ec000b3bd70e\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963295 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") pod \"19c29a84-66f0-4537-b266-ec000b3bd70e\" (UID: \"19c29a84-66f0-4537-b266-ec000b3bd70e\") " Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963420 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963473 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963528 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.963557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.964067 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config" (OuterVolumeSpecName: "config") pod "19c29a84-66f0-4537-b266-ec000b3bd70e" (UID: "19c29a84-66f0-4537-b266-ec000b3bd70e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.964516 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca" (OuterVolumeSpecName: "client-ca") pod "19c29a84-66f0-4537-b266-ec000b3bd70e" (UID: "19c29a84-66f0-4537-b266-ec000b3bd70e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.964882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.965170 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.969285 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h" (OuterVolumeSpecName: "kube-api-access-lx55h") pod "19c29a84-66f0-4537-b266-ec000b3bd70e" (UID: "19c29a84-66f0-4537-b266-ec000b3bd70e"). InnerVolumeSpecName "kube-api-access-lx55h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.969813 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.975201 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "19c29a84-66f0-4537-b266-ec000b3bd70e" (UID: "19c29a84-66f0-4537-b266-ec000b3bd70e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:39:07 crc kubenswrapper[4902]: I0121 14:39:07.991881 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") pod \"route-controller-manager-7b895dcf8-vkrtf\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.043533 4902 generic.go:334] "Generic (PLEG): container finished" podID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerID="7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83" exitCode=0 Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.043600 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" event={"ID":"2e95c252-bd71-44fe-a8f1-d9a346d8a882","Type":"ContainerDied","Data":"7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83"} Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044840 4902 generic.go:334] "Generic (PLEG): container finished" podID="19c29a84-66f0-4537-b266-ec000b3bd70e" containerID="45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" exitCode=0 Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044864 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" event={"ID":"19c29a84-66f0-4537-b266-ec000b3bd70e","Type":"ContainerDied","Data":"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45"} Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" event={"ID":"19c29a84-66f0-4537-b266-ec000b3bd70e","Type":"ContainerDied","Data":"26466d6a0f3dd74552659838d2ebe09425c9b03a70d960e33a6db94de70ceba7"} Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044913 4902 scope.go:117] "RemoveContainer" containerID="45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.044918 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.060297 4902 scope.go:117] "RemoveContainer" containerID="45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" Jan 21 14:39:08 crc kubenswrapper[4902]: E0121 14:39:08.061480 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45\": container with ID starting with 45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45 not found: ID does not exist" containerID="45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.061531 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45"} err="failed to get container status \"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45\": rpc error: code = NotFound desc = could not find container \"45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45\": container with ID starting with 45f0811a55b9e6da8c88983a29fa68efdfc76431ce33dc5a818292c3a37acb45 not found: ID does not exist" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.065268 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19c29a84-66f0-4537-b266-ec000b3bd70e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.065299 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx55h\" (UniqueName: \"kubernetes.io/projected/19c29a84-66f0-4537-b266-ec000b3bd70e-kube-api-access-lx55h\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.065313 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.065325 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/19c29a84-66f0-4537-b266-ec000b3bd70e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.077603 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.082405 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-j48hh"] Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.132291 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.305391 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c29a84-66f0-4537-b266-ec000b3bd70e" path="/var/lib/kubelet/pods/19c29a84-66f0-4537-b266-ec000b3bd70e/volumes" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.372018 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.554832 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:39:08 crc kubenswrapper[4902]: W0121 14:39:08.563425 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c457167_3537_454e_9813_8d4368a5c81a.slice/crio-ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724 WatchSource:0}: Error finding container ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724: Status 404 returned error can't find the container with id ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724 Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569659 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569708 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569745 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569767 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569795 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569850 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.569877 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.570036 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\" (UID: \"2e95c252-bd71-44fe-a8f1-d9a346d8a882\") " Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.570821 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.571872 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.575116 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m" (OuterVolumeSpecName: "kube-api-access-r9l2m") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "kube-api-access-r9l2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.575717 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.575820 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.576152 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.589949 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.597913 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2e95c252-bd71-44fe-a8f1-d9a346d8a882" (UID: "2e95c252-bd71-44fe-a8f1-d9a346d8a882"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672002 4902 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672030 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672056 4902 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e95c252-bd71-44fe-a8f1-d9a346d8a882-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672067 4902 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672077 4902 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672085 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9l2m\" (UniqueName: \"kubernetes.io/projected/2e95c252-bd71-44fe-a8f1-d9a346d8a882-kube-api-access-r9l2m\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:08 crc kubenswrapper[4902]: I0121 14:39:08.672094 4902 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e95c252-bd71-44fe-a8f1-d9a346d8a882-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.050852 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" event={"ID":"8c457167-3537-454e-9813-8d4368a5c81a","Type":"ContainerStarted","Data":"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a"} Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.050898 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" event={"ID":"8c457167-3537-454e-9813-8d4368a5c81a","Type":"ContainerStarted","Data":"ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724"} Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.051084 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.052344 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" 
event={"ID":"2e95c252-bd71-44fe-a8f1-d9a346d8a882","Type":"ContainerDied","Data":"72fa44f70a1a8a5c4b377700f7f908db843af15c5da8c33d09c4e26da32bbe19"} Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.052391 4902 scope.go:117] "RemoveContainer" containerID="7547a62e909793d452303b8e38ed4e3709638a07c8cd2df82117a97266265a83" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.052500 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9nccj" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.059115 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.081124 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" podStartSLOduration=3.081103946 podStartE2EDuration="3.081103946s" podCreationTimestamp="2026-01-21 14:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:39:09.078407582 +0000 UTC m=+311.155240611" watchObservedRunningTime="2026-01-21 14:39:09.081103946 +0000 UTC m=+311.157936975" Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.092654 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:39:09 crc kubenswrapper[4902]: I0121 14:39:09.101731 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9nccj"] Jan 21 14:39:10 crc kubenswrapper[4902]: I0121 14:39:10.300867 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" path="/var/lib/kubelet/pods/2e95c252-bd71-44fe-a8f1-d9a346d8a882/volumes" Jan 21 14:40:06 crc kubenswrapper[4902]: I0121 14:40:06.758315 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:40:06 crc kubenswrapper[4902]: I0121 14:40:06.759307 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" podUID="8c457167-3537-454e-9813-8d4368a5c81a" containerName="route-controller-manager" containerID="cri-o://6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" gracePeriod=30 Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.247590 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368058 4902 generic.go:334] "Generic (PLEG): container finished" podID="8c457167-3537-454e-9813-8d4368a5c81a" containerID="6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" exitCode=0 Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368114 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" event={"ID":"8c457167-3537-454e-9813-8d4368a5c81a","Type":"ContainerDied","Data":"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a"} Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368145 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" event={"ID":"8c457167-3537-454e-9813-8d4368a5c81a","Type":"ContainerDied","Data":"ae70df3ba9716f535c67df452b7e64d459d015c7500af14b1cae99a0a0635724"} Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368143 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.368164 4902 scope.go:117] "RemoveContainer" containerID="6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.387445 4902 scope.go:117] "RemoveContainer" containerID="6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" Jan 21 14:40:07 crc kubenswrapper[4902]: E0121 14:40:07.388732 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a\": container with ID starting with 6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a not found: ID does not exist" containerID="6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.388801 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a"} err="failed to get container status \"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a\": rpc error: code = NotFound desc = could not find container \"6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a\": container with ID starting with 6d924d34f92ca175efd2fa91d1d9aa2155d239a07c56d8adc145fd674418df7a not found: ID does not exist" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.410733 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") pod \"8c457167-3537-454e-9813-8d4368a5c81a\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.410861 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") pod \"8c457167-3537-454e-9813-8d4368a5c81a\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.410902 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") pod \"8c457167-3537-454e-9813-8d4368a5c81a\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.410956 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") pod \"8c457167-3537-454e-9813-8d4368a5c81a\" (UID: \"8c457167-3537-454e-9813-8d4368a5c81a\") " Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.411899 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config" (OuterVolumeSpecName: "config") pod "8c457167-3537-454e-9813-8d4368a5c81a" (UID: "8c457167-3537-454e-9813-8d4368a5c81a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.412302 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c457167-3537-454e-9813-8d4368a5c81a" (UID: "8c457167-3537-454e-9813-8d4368a5c81a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.422341 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9" (OuterVolumeSpecName: "kube-api-access-9hqf9") pod "8c457167-3537-454e-9813-8d4368a5c81a" (UID: "8c457167-3537-454e-9813-8d4368a5c81a"). InnerVolumeSpecName "kube-api-access-9hqf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.422544 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c457167-3537-454e-9813-8d4368a5c81a" (UID: "8c457167-3537-454e-9813-8d4368a5c81a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.513111 4902 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c457167-3537-454e-9813-8d4368a5c81a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.513157 4902 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.513170 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hqf9\" (UniqueName: \"kubernetes.io/projected/8c457167-3537-454e-9813-8d4368a5c81a-kube-api-access-9hqf9\") on node \"crc\" DevicePath \"\"" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.513183 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c457167-3537-454e-9813-8d4368a5c81a-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.694223 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.699566 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b895dcf8-vkrtf"] Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874164 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt"] Jan 21 14:40:07 crc kubenswrapper[4902]: E0121 14:40:07.874407 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874425 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" Jan 21 14:40:07 crc kubenswrapper[4902]: E0121 14:40:07.874471 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c457167-3537-454e-9813-8d4368a5c81a" containerName="route-controller-manager" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874480 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c457167-3537-454e-9813-8d4368a5c81a" containerName="route-controller-manager" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874620 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e95c252-bd71-44fe-a8f1-d9a346d8a882" containerName="registry" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.874640 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c457167-3537-454e-9813-8d4368a5c81a" containerName="route-controller-manager" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.875060 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.876723 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882098 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882312 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882327 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882837 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.882915 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:40:07 crc kubenswrapper[4902]: I0121 14:40:07.892170 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt"] Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.018626 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-client-ca\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.018704 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6x9k\" (UniqueName: \"kubernetes.io/projected/73d979ed-d4af-414c-911c-d6246f682f19-kube-api-access-n6x9k\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.018751 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-config\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.018779 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73d979ed-d4af-414c-911c-d6246f682f19-serving-cert\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.119654 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-config\") pod 
\"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.119715 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73d979ed-d4af-414c-911c-d6246f682f19-serving-cert\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.119777 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-client-ca\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.119810 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6x9k\" (UniqueName: \"kubernetes.io/projected/73d979ed-d4af-414c-911c-d6246f682f19-kube-api-access-n6x9k\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.121025 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-config\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.121923 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73d979ed-d4af-414c-911c-d6246f682f19-client-ca\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.129909 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73d979ed-d4af-414c-911c-d6246f682f19-serving-cert\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.150395 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6x9k\" (UniqueName: \"kubernetes.io/projected/73d979ed-d4af-414c-911c-d6246f682f19-kube-api-access-n6x9k\") pod \"route-controller-manager-54b864cd9c-lm5kt\" (UID: \"73d979ed-d4af-414c-911c-d6246f682f19\") " pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.188740 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.305909 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c457167-3537-454e-9813-8d4368a5c81a" path="/var/lib/kubelet/pods/8c457167-3537-454e-9813-8d4368a5c81a/volumes" Jan 21 14:40:08 crc kubenswrapper[4902]: I0121 14:40:08.641195 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt"] Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.386416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" event={"ID":"73d979ed-d4af-414c-911c-d6246f682f19","Type":"ContainerStarted","Data":"8b59272b0e8ef0c26929cebefe1f549c93e075b724ad829848efd3dbe8b75b87"} Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.389400 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" event={"ID":"73d979ed-d4af-414c-911c-d6246f682f19","Type":"ContainerStarted","Data":"2f8af44cbe969b1d62717c4fd7f965210b4ef59ce492b827dd2dcb705d902642"} Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.389531 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.444988 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" Jan 21 14:40:09 crc kubenswrapper[4902]: I0121 14:40:09.466334 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54b864cd9c-lm5kt" podStartSLOduration=3.466314067 podStartE2EDuration="3.466314067s" podCreationTimestamp="2026-01-21 14:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:40:09.409480252 +0000 UTC m=+371.486313301" watchObservedRunningTime="2026-01-21 14:40:09.466314067 +0000 UTC m=+371.543147096" Jan 21 14:40:17 crc kubenswrapper[4902]: I0121 14:40:17.769743 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:40:17 crc kubenswrapper[4902]: I0121 14:40:17.770345 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:40:47 crc kubenswrapper[4902]: I0121 14:40:47.770448 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:40:47 crc kubenswrapper[4902]: I0121 14:40:47.771155 4902 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.769571 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.770285 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.770343 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.771115 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:41:17 crc kubenswrapper[4902]: I0121 14:41:17.771182 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9" gracePeriod=600 Jan 21 14:41:18 crc kubenswrapper[4902]: I0121 14:41:18.787471 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9" exitCode=0 Jan 21 14:41:18 crc kubenswrapper[4902]: I0121 14:41:18.787721 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9"} Jan 21 14:41:18 crc kubenswrapper[4902]: I0121 14:41:18.787869 4902 scope.go:117] "RemoveContainer" containerID="67176559067f1cb815d3139251d627afe15ecb171187c72adeb011d36b7fb388" Jan 21 14:41:19 crc kubenswrapper[4902]: I0121 14:41:19.808226 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e"} Jan 21 14:43:47 crc kubenswrapper[4902]: I0121 14:43:47.769724 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:43:47 crc 
kubenswrapper[4902]: I0121 14:43:47.770297 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:44:17 crc kubenswrapper[4902]: I0121 14:44:17.770939 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:44:17 crc kubenswrapper[4902]: I0121 14:44:17.773183 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.825646 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8l7jc"] Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827150 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-controller" containerID="cri-o://f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827231 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="nbdb" containerID="cri-o://cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827350 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="northd" containerID="cri-o://99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827447 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827529 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-node" containerID="cri-o://1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.827607 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-acl-logging" containerID="cri-o://f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 
14:44:34.827809 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="sbdb" containerID="cri-o://8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.904185 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" containerID="cri-o://1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" gracePeriod=30 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.973088 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/2.log" Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.975635 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/1.log" Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.975690 4902 generic.go:334] "Generic (PLEG): container finished" podID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" containerID="5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d" exitCode=2 Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.975726 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerDied","Data":"5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d"} Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.975766 4902 scope.go:117] "RemoveContainer" containerID="1743ac03027a8aa41f958deac88876bf3266eea1682fd05b1026657687440fc6" Jan 21 14:44:34 crc kubenswrapper[4902]: I0121 14:44:34.976394 4902 scope.go:117] "RemoveContainer" containerID="5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d" Jan 21 14:44:34 crc kubenswrapper[4902]: E0121 14:44:34.976657 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mztd6_openshift-multus(037b55cf-cb9e-41ce-8b1e-3898f490a4aa)\"" pod="openshift-multus/multus-mztd6" podUID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.562535 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.565633 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovn-acl-logging/0.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.566470 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovn-controller/0.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.567133 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.624623 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dhrhm"] Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.624972 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="northd" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625074 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="northd" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625154 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-acl-logging" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625210 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-acl-logging" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625263 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625312 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625364 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625412 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625462 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625532 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625589 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625639 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625694 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-node" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625738 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-node" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625790 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="sbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625838 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="sbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625891 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="nbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.625938 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="nbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.625983 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626033 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.626126 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kubecfg-setup" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626248 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kubecfg-setup" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.626303 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626353 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626473 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626534 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="sbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626587 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626668 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626735 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626788 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626840 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-acl-logging" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626887 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="northd" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626937 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="nbdb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.626990 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovn-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.627063 4902 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="kube-rbac-proxy-node" Jan 21 14:44:35 crc kubenswrapper[4902]: E0121 14:44:35.627217 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.627293 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.627425 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerName="ovnkube-controller" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.631958 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710434 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710505 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710529 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710549 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710565 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710568 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket" (OuterVolumeSpecName: "log-socket") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710568 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710610 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710613 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710638 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710610 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710707 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710731 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710938 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.710978 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711002 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711026 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711037 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711063 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711079 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711098 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711100 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711148 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711167 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711175 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711196 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log" (OuterVolumeSpecName: "node-log") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711212 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711224 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711235 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711288 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash" (OuterVolumeSpecName: "host-slash") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711417 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711735 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711440 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711755 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") pod \"0ec3a89a-830c-4274-8c1e-bd3c98120708\" (UID: \"0ec3a89a-830c-4274-8c1e-bd3c98120708\") " Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711467 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711799 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711887 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfs6\" (UniqueName: \"kubernetes.io/projected/4efb5f30-d596-48cb-8fd7-85968f522bb6-kube-api-access-lnfs6\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711917 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-log-socket\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711965 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.711999 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-netd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712018 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712067 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-script-lib\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712129 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-bin\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712211 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-slash\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-etc-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712299 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-env-overrides\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712316 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712339 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-kubelet\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712372 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-systemd-units\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712392 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovn-node-metrics-cert\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712408 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-node-log\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712443 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-config\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712459 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712479 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-netns\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712495 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-systemd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712543 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-var-lib-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712558 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-ovn\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712652 4902 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712664 4902 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-env-overrides\") on 
node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712767 4902 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712793 4902 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712808 4902 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712820 4902 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712832 4902 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712843 4902 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712857 4902 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712870 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712882 4902 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712893 4902 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712904 4902 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712915 4902 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712925 4902 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-etc-openvswitch\") on node \"crc\" DevicePath \"\"" 
Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712935 4902 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.712947 4902 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.715846 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8" (OuterVolumeSpecName: "kube-api-access-fcxq8") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "kube-api-access-fcxq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.716144 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.723379 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0ec3a89a-830c-4274-8c1e-bd3c98120708" (UID: "0ec3a89a-830c-4274-8c1e-bd3c98120708"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813832 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813915 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-netd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813922 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813947 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-script-lib\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.813996 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-bin\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814030 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-slash\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814077 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-etc-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-env-overrides\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814104 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-netd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814135 
4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-slash\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814117 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-cni-bin\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814147 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814125 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814285 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-kubelet\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814339 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-systemd-units\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovn-node-metrics-cert\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814414 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-node-log\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-config\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814484 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-netns\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814567 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-systemd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814627 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-var-lib-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814673 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-ovn\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814739 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfs6\" (UniqueName: \"kubernetes.io/projected/4efb5f30-d596-48cb-8fd7-85968f522bb6-kube-api-access-lnfs6\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-env-overrides\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814784 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-log-socket\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814799 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-netns\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-kubelet\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814187 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-etc-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814898 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-var-lib-openvswitch\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814911 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-node-log\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814902 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-log-socket\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814939 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-systemd\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.814985 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-run-ovn\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815258 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-host-run-ovn-kubernetes\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815316 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-config\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815320 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcxq8\" (UniqueName: \"kubernetes.io/projected/0ec3a89a-830c-4274-8c1e-bd3c98120708-kube-api-access-fcxq8\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815348 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4efb5f30-d596-48cb-8fd7-85968f522bb6-systemd-units\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815355 4902 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec3a89a-830c-4274-8c1e-bd3c98120708-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.815379 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec3a89a-830c-4274-8c1e-bd3c98120708-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.816320 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovnkube-script-lib\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.825211 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4efb5f30-d596-48cb-8fd7-85968f522bb6-ovn-node-metrics-cert\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.835190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfs6\" (UniqueName: \"kubernetes.io/projected/4efb5f30-d596-48cb-8fd7-85968f522bb6-kube-api-access-lnfs6\") pod \"ovnkube-node-dhrhm\" (UID: \"4efb5f30-d596-48cb-8fd7-85968f522bb6\") " pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.952115 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.986222 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/2.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.987750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"ad059d3c04662441495a96a8cd0280ba57077575c4151ab3d7af23456bd3809f"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.990606 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovnkube-controller/3.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.994237 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovn-acl-logging/0.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.995253 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8l7jc_0ec3a89a-830c-4274-8c1e-bd3c98120708/ovn-controller/0.log" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996001 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996125 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996220 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996406 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996497 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996120 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996610 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996633 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc 
kubenswrapper[4902]: I0121 14:44:35.996646 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996657 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996658 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996667 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996801 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996256 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996813 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996822 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996830 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996837 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996844 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996851 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996857 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996881 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 
14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.996570 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" exitCode=0 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.997304 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" exitCode=143 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.997380 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ec3a89a-830c-4274-8c1e-bd3c98120708" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" exitCode=143 Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.997375 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998600 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998631 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998643 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998653 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998692 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998703 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998712 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998721 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998735 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998744 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998791 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998811 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998824 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998834 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998872 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998881 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998891 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998903 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998912 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998980 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.998996 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999012 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8l7jc" event={"ID":"0ec3a89a-830c-4274-8c1e-bd3c98120708","Type":"ContainerDied","Data":"5ce6899ab2b12b8f4895228356fb88bbef937550a4743b5874ab9aba66a78a98"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999030 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999081 4902 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999094 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999106 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999115 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999125 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999134 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999168 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999179 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} Jan 21 14:44:35 crc kubenswrapper[4902]: I0121 14:44:35.999188 4902 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.032679 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.053830 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.056841 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8l7jc"] Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.059487 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8l7jc"] Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.070413 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.081645 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.096136 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.097051 4902 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec3a89a_830c_4274_8c1e_bd3c98120708.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec3a89a_830c_4274_8c1e_bd3c98120708.slice/crio-5ce6899ab2b12b8f4895228356fb88bbef937550a4743b5874ab9aba66a78a98\": RecentStats: unable to find data in memory cache]" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.111756 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.166637 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.178773 4902 scope.go:117] "RemoveContainer" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.204919 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.230416 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.231088 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.231166 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.231199 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.231687 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.231726 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} err="failed to get container status \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 
16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.231755 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.232168 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.232230 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} err="failed to get container status \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.232274 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.232754 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.232785 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} err="failed to get container status \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.232805 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.233222 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.233253 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} err="failed to get container status \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": rpc 
error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.233274 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.233615 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.233644 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} err="failed to get container status \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.233665 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.234095 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234120 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} err="failed to get container status \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234139 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.234497 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234526 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} err="failed to get container status \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234542 4902 scope.go:117] "RemoveContainer" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.234887 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234913 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} err="failed to get container status \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.234928 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: E0121 14:44:36.235342 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.235391 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} err="failed to get container status \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.235426 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.235847 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container 
\"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.235892 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.236258 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} err="failed to get container status \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.236302 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.236709 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} err="failed to get container status \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.236742 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.237181 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} err="failed to get container status \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.237209 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.237516 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} err="failed to get container status \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": rpc error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.237538 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238165 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} err="failed to get container status \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238207 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238549 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} err="failed to get container status \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238578 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238892 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} err="failed to get container status \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.238931 4902 scope.go:117] "RemoveContainer" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239274 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} err="failed to get container status \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239301 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239627 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} err="failed to get container status \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 
5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239659 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239946 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.239980 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240305 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} err="failed to get container status \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240332 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240607 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} err="failed to get container status \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240633 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240928 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} err="failed to get container status \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.240961 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.241465 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} err="failed to get container status \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": rpc error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.241494 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.241792 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} err="failed to get container status \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.241815 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.242197 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} err="failed to get container status \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.242235 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.242710 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} err="failed to get container status \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.242745 4902 scope.go:117] "RemoveContainer" containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243133 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} err="failed to get container status \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist" Jan 
21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243157 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243549 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} err="failed to get container status \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243580 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243901 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.243936 4902 scope.go:117] "RemoveContainer" containerID="16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244281 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e"} err="failed to get container status \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": rpc error: code = NotFound desc = could not find container \"16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e\": container with ID starting with 16dce8a774665c34bd2d21df5e1c39db6e4e6a93c6f4efcbc2d39fb8000db85e not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244306 4902 scope.go:117] "RemoveContainer" containerID="8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244565 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6"} err="failed to get container status \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": rpc error: code = NotFound desc = could not find container \"8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6\": container with ID starting with 8ecf7661f7a42b0b8562d463f014d9735836032c69223b4f608ac8132dc2f8f6 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244592 4902 scope.go:117] "RemoveContainer" containerID="cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244877 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9"} err="failed to get container status 
\"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": rpc error: code = NotFound desc = could not find container \"cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9\": container with ID starting with cc48c12a94cd79f0cdcd2e43eda3c1120baaf827e1e2a0b3de03ad601d4719e9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.244901 4902 scope.go:117] "RemoveContainer" containerID="99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245175 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240"} err="failed to get container status \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": rpc error: code = NotFound desc = could not find container \"99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240\": container with ID starting with 99554bed9cbb934845044e30d35de4b4e37099aad12b1fce03eb079949421240 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245199 4902 scope.go:117] "RemoveContainer" containerID="82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245444 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8"} err="failed to get container status \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": rpc error: code = NotFound desc = could not find container \"82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8\": container with ID starting with 82f29c30c7ac60621ed81358e2443b3f16bf507c1939fa8e036d403d8dd24ce8 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245480 4902 scope.go:117] "RemoveContainer" containerID="1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245896 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321"} err="failed to get container status \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": rpc error: code = NotFound desc = could not find container \"1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321\": container with ID starting with 1b9499cd88c63e8d9664bdf09134ad2c09f7394e02bc143924d9579b8796e321 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.245920 4902 scope.go:117] "RemoveContainer" containerID="f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246207 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d"} err="failed to get container status \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": rpc error: code = NotFound desc = could not find container \"f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d\": container with ID starting with f8738f1dadb1e9623c9f1e834618051f5ce9356c303d19d26fef30303120089d not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246241 4902 scope.go:117] "RemoveContainer" 
containerID="f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246658 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b"} err="failed to get container status \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": rpc error: code = NotFound desc = could not find container \"f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b\": container with ID starting with f86065fd81c1f29fd66740a8baed7fe206109ff2785422690ad3db2d421b533b not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246699 4902 scope.go:117] "RemoveContainer" containerID="5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246962 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9"} err="failed to get container status \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": rpc error: code = NotFound desc = could not find container \"5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9\": container with ID starting with 5b3d3229ca4649f4e85c8797c66f2394f62d96c58b19294a970d52bc53f283a9 not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.246989 4902 scope.go:117] "RemoveContainer" containerID="1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.247292 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb"} err="failed to get container status \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": rpc error: code = NotFound desc = could not find container \"1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb\": container with ID starting with 1b7bcd53266fa9db980929eb0419f6522252534a32c5e22c31e395d5297e99fb not found: ID does not exist" Jan 21 14:44:36 crc kubenswrapper[4902]: I0121 14:44:36.303939 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec3a89a-830c-4274-8c1e-bd3c98120708" path="/var/lib/kubelet/pods/0ec3a89a-830c-4274-8c1e-bd3c98120708/volumes" Jan 21 14:44:37 crc kubenswrapper[4902]: I0121 14:44:37.006018 4902 generic.go:334] "Generic (PLEG): container finished" podID="4efb5f30-d596-48cb-8fd7-85968f522bb6" containerID="4e55abc3c8cffd91a1f936cf36cb221f25e916ae78ccfb0be036073e2cc4d481" exitCode=0 Jan 21 14:44:37 crc kubenswrapper[4902]: I0121 14:44:37.006082 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerDied","Data":"4e55abc3c8cffd91a1f936cf36cb221f25e916ae78ccfb0be036073e2cc4d481"} Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.017595 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"52a01671f2cdbe46ee209fecd24ce54d63691986995ab91fe19ecce4c14d9684"} Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019096 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" 
event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"586db90d7b9ba7aeedf26e2fce60c044446f9fa89706859a2ee71a4a21fec242"} Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"a008fea5f1bbd714f64bc241037ee503c1f687d265bd6d2a0f26ae1dde8fc1c7"} Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"218c31c02fabec107cd708634f79587f806de91b1b369776e06b394c5890bc31"} Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019358 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"5c77c1ff71629f4c3f1ed5f377987bf3648a72717f5a18e9f99ade3a00bf1d08"} Jan 21 14:44:38 crc kubenswrapper[4902]: I0121 14:44:38.019436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"ef658df00fa72c2c2d41ecb4029b789c15b2ea8c2c6c7dcbe002bd573f027b17"} Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.040411 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"3efae7cce7888dd17e09a54ccf0d60d54ab81978ba3ba8a6b07376413f1e8114"} Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.585918 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"] Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.586729 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.588943 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.589444 4902 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-qwwr2" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.589785 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.590457 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.590744 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.590968 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.590510 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.692492 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.692687 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.692782 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.693196 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.694496 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.725540 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") pod \"crc-storage-crc-2v7g4\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: I0121 14:44:41.908165 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: E0121 14:44:41.944713 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(1ebf17af0782bc29382b336ac529768568dfcf589051ddd131b8de039bf40135): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:44:41 crc kubenswrapper[4902]: E0121 14:44:41.945102 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(1ebf17af0782bc29382b336ac529768568dfcf589051ddd131b8de039bf40135): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: E0121 14:44:41.945337 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(1ebf17af0782bc29382b336ac529768568dfcf589051ddd131b8de039bf40135): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:41 crc kubenswrapper[4902]: E0121 14:44:41.945712 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(1ebf17af0782bc29382b336ac529768568dfcf589051ddd131b8de039bf40135): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2v7g4" podUID="33301553-deaa-4183-9538-1a43f822be80" Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.057980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" event={"ID":"4efb5f30-d596-48cb-8fd7-85968f522bb6","Type":"ContainerStarted","Data":"ce4721052f072f29cf2301c45a9bd8fe0a0061b719a0d060c59820ebdb9e2aa9"} Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.058479 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.058495 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.058540 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.081102 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"] Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.081191 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.081546 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.091256 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.094560 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" podStartSLOduration=8.094539419 podStartE2EDuration="8.094539419s" podCreationTimestamp="2026-01-21 14:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:44:43.093403966 +0000 UTC m=+645.170237015" watchObservedRunningTime="2026-01-21 14:44:43.094539419 +0000 UTC m=+645.171372448" Jan 21 14:44:43 crc kubenswrapper[4902]: I0121 14:44:43.095271 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:44:43 crc kubenswrapper[4902]: E0121 14:44:43.110996 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(eecf7cd942b09f7468f82c9ae28745d547ac1dd0f9c0d8a044fdbefa0d073b41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:44:43 crc kubenswrapper[4902]: E0121 14:44:43.111063 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(eecf7cd942b09f7468f82c9ae28745d547ac1dd0f9c0d8a044fdbefa0d073b41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:43 crc kubenswrapper[4902]: E0121 14:44:43.111090 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(eecf7cd942b09f7468f82c9ae28745d547ac1dd0f9c0d8a044fdbefa0d073b41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:43 crc kubenswrapper[4902]: E0121 14:44:43.111132 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(eecf7cd942b09f7468f82c9ae28745d547ac1dd0f9c0d8a044fdbefa0d073b41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2v7g4" podUID="33301553-deaa-4183-9538-1a43f822be80" Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.770733 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.771617 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.772464 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.773333 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:44:47 crc kubenswrapper[4902]: I0121 14:44:47.773433 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e" gracePeriod=600 Jan 21 14:44:48 crc kubenswrapper[4902]: I0121 14:44:48.091276 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e" exitCode=0 Jan 21 14:44:48 crc kubenswrapper[4902]: I0121 14:44:48.091328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e"} Jan 21 14:44:48 crc kubenswrapper[4902]: I0121 14:44:48.091363 4902 scope.go:117] "RemoveContainer" containerID="55b2a83cf4462f21e140aaf547deeb73f9aa69b5d7dddabe47e579030fe921f9" Jan 21 14:44:48 crc kubenswrapper[4902]: I0121 14:44:48.299692 4902 scope.go:117] "RemoveContainer" containerID="5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d" Jan 21 14:44:48 crc kubenswrapper[4902]: E0121 14:44:48.300082 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-mztd6_openshift-multus(037b55cf-cb9e-41ce-8b1e-3898f490a4aa)\"" pod="openshift-multus/multus-mztd6" podUID="037b55cf-cb9e-41ce-8b1e-3898f490a4aa" Jan 21 14:44:49 crc kubenswrapper[4902]: I0121 14:44:49.101494 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca"} Jan 21 14:44:54 crc kubenswrapper[4902]: I0121 14:44:54.294196 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:54 crc kubenswrapper[4902]: I0121 14:44:54.294886 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:54 crc kubenswrapper[4902]: E0121 14:44:54.325646 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(103d44a9baa15814086ecf3f6c4e014fcd678e9792c7f66f94dfd9a695f7ac69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:44:54 crc kubenswrapper[4902]: E0121 14:44:54.326182 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(103d44a9baa15814086ecf3f6c4e014fcd678e9792c7f66f94dfd9a695f7ac69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:54 crc kubenswrapper[4902]: E0121 14:44:54.326231 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(103d44a9baa15814086ecf3f6c4e014fcd678e9792c7f66f94dfd9a695f7ac69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:44:54 crc kubenswrapper[4902]: E0121 14:44:54.326421 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2v7g4_crc-storage(33301553-deaa-4183-9538-1a43f822be80)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2v7g4_crc-storage_33301553-deaa-4183-9538-1a43f822be80_0(103d44a9baa15814086ecf3f6c4e014fcd678e9792c7f66f94dfd9a695f7ac69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2v7g4" podUID="33301553-deaa-4183-9538-1a43f822be80" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.155851 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"] Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.157508 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.158979 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.159449 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.172077 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"] Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.341184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.341528 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.341591 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.443309 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.443732 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.443944 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.444762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.450241 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.460847 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") pod \"collect-profiles-29483445-2whx2\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: I0121 14:45:00.480231 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: E0121 14:45:00.503537 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(7122897a24392e7a89d724864ece6d32a75bfb47dad155ca4c7294299109b38a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:45:00 crc kubenswrapper[4902]: E0121 14:45:00.503614 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(7122897a24392e7a89d724864ece6d32a75bfb47dad155ca4c7294299109b38a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: E0121 14:45:00.503642 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(7122897a24392e7a89d724864ece6d32a75bfb47dad155ca4c7294299109b38a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:00 crc kubenswrapper[4902]: E0121 14:45:00.503700 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager(0fbc78bb-1faf-4da9-ab79-cee1540bb647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager(0fbc78bb-1faf-4da9-ab79-cee1540bb647)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(7122897a24392e7a89d724864ece6d32a75bfb47dad155ca4c7294299109b38a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" Jan 21 14:45:01 crc kubenswrapper[4902]: I0121 14:45:01.180672 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:01 crc kubenswrapper[4902]: I0121 14:45:01.181926 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:01 crc kubenswrapper[4902]: E0121 14:45:01.218942 4902 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(26f898ada82329d6d422c780fae53f157c7f1038550e131a957546c157bdf041): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:45:01 crc kubenswrapper[4902]: E0121 14:45:01.219300 4902 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(26f898ada82329d6d422c780fae53f157c7f1038550e131a957546c157bdf041): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:01 crc kubenswrapper[4902]: E0121 14:45:01.219331 4902 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(26f898ada82329d6d422c780fae53f157c7f1038550e131a957546c157bdf041): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:01 crc kubenswrapper[4902]: E0121 14:45:01.219391 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager(0fbc78bb-1faf-4da9-ab79-cee1540bb647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager(0fbc78bb-1faf-4da9-ab79-cee1540bb647)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29483445-2whx2_openshift-operator-lifecycle-manager_0fbc78bb-1faf-4da9-ab79-cee1540bb647_0(26f898ada82329d6d422c780fae53f157c7f1038550e131a957546c157bdf041): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" Jan 21 14:45:02 crc kubenswrapper[4902]: I0121 14:45:02.295367 4902 scope.go:117] "RemoveContainer" containerID="5db75faf330517f6e52171754b04634f6e477d49b65357ee3295df0a7560fb4d" Jan 21 14:45:03 crc kubenswrapper[4902]: I0121 14:45:03.197206 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/2.log" Jan 21 14:45:03 crc kubenswrapper[4902]: I0121 14:45:03.197298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mztd6" event={"ID":"037b55cf-cb9e-41ce-8b1e-3898f490a4aa","Type":"ContainerStarted","Data":"51154431438b475e544119556d5d3665be92d7fa8bff56e1cd8614a93dda6ab2"} Jan 21 14:45:05 crc kubenswrapper[4902]: I0121 14:45:05.978851 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dhrhm" Jan 21 14:45:08 crc kubenswrapper[4902]: I0121 14:45:08.294236 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:45:08 crc kubenswrapper[4902]: I0121 14:45:08.299429 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:45:08 crc kubenswrapper[4902]: I0121 14:45:08.767999 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"] Jan 21 14:45:08 crc kubenswrapper[4902]: I0121 14:45:08.780506 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:45:09 crc kubenswrapper[4902]: I0121 14:45:09.235696 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2v7g4" event={"ID":"33301553-deaa-4183-9538-1a43f822be80","Type":"ContainerStarted","Data":"168dd8050c7d704f577789dc61a56a850b7ce18ed7ff065b9ba17798be68a97b"} Jan 21 14:45:11 crc kubenswrapper[4902]: I0121 14:45:11.249165 4902 generic.go:334] "Generic (PLEG): container finished" podID="33301553-deaa-4183-9538-1a43f822be80" containerID="bea584749b1ccfd891d97d3ebbaf45ab41b6cc3e6efd100d0aa2c6701cc97c94" exitCode=0 Jan 21 14:45:11 crc kubenswrapper[4902]: I0121 14:45:11.249289 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2v7g4" event={"ID":"33301553-deaa-4183-9538-1a43f822be80","Type":"ContainerDied","Data":"bea584749b1ccfd891d97d3ebbaf45ab41b6cc3e6efd100d0aa2c6701cc97c94"} Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.294907 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.295699 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.534850 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"] Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.539322 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:45:12 crc kubenswrapper[4902]: W0121 14:45:12.540238 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fbc78bb_1faf_4da9_ab79_cee1540bb647.slice/crio-e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba WatchSource:0}: Error finding container e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba: Status 404 returned error can't find the container with id e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.721398 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") pod \"33301553-deaa-4183-9538-1a43f822be80\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.721758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") pod \"33301553-deaa-4183-9538-1a43f822be80\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.721522 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "33301553-deaa-4183-9538-1a43f822be80" (UID: "33301553-deaa-4183-9538-1a43f822be80"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.721935 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") pod \"33301553-deaa-4183-9538-1a43f822be80\" (UID: \"33301553-deaa-4183-9538-1a43f822be80\") " Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.722247 4902 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/33301553-deaa-4183-9538-1a43f822be80-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.727262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r" (OuterVolumeSpecName: "kube-api-access-4qb5r") pod "33301553-deaa-4183-9538-1a43f822be80" (UID: "33301553-deaa-4183-9538-1a43f822be80"). InnerVolumeSpecName "kube-api-access-4qb5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.735551 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "33301553-deaa-4183-9538-1a43f822be80" (UID: "33301553-deaa-4183-9538-1a43f822be80"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.823218 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qb5r\" (UniqueName: \"kubernetes.io/projected/33301553-deaa-4183-9538-1a43f822be80-kube-api-access-4qb5r\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:12 crc kubenswrapper[4902]: I0121 14:45:12.823522 4902 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/33301553-deaa-4183-9538-1a43f822be80-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.263905 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" event={"ID":"0fbc78bb-1faf-4da9-ab79-cee1540bb647","Type":"ContainerDied","Data":"fa1156cf23ef6713ff3d92ca234f6e5140ae3f940464e50453ee6dd138fecf3b"} Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.263706 4902 generic.go:334] "Generic (PLEG): container finished" podID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" containerID="fa1156cf23ef6713ff3d92ca234f6e5140ae3f940464e50453ee6dd138fecf3b" exitCode=0 Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.264466 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" event={"ID":"0fbc78bb-1faf-4da9-ab79-cee1540bb647","Type":"ContainerStarted","Data":"e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba"} Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.268474 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2v7g4" event={"ID":"33301553-deaa-4183-9538-1a43f822be80","Type":"ContainerDied","Data":"168dd8050c7d704f577789dc61a56a850b7ce18ed7ff065b9ba17798be68a97b"} Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.268569 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="168dd8050c7d704f577789dc61a56a850b7ce18ed7ff065b9ba17798be68a97b" Jan 21 14:45:13 crc kubenswrapper[4902]: I0121 14:45:13.268640 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2v7g4" Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.523172 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.646319 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") pod \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.646427 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") pod \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.646464 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") pod \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\" (UID: \"0fbc78bb-1faf-4da9-ab79-cee1540bb647\") " Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.647237 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume" (OuterVolumeSpecName: "config-volume") pod "0fbc78bb-1faf-4da9-ab79-cee1540bb647" (UID: "0fbc78bb-1faf-4da9-ab79-cee1540bb647"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.650781 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0fbc78bb-1faf-4da9-ab79-cee1540bb647" (UID: "0fbc78bb-1faf-4da9-ab79-cee1540bb647"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.650985 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd" (OuterVolumeSpecName: "kube-api-access-t6jxd") pod "0fbc78bb-1faf-4da9-ab79-cee1540bb647" (UID: "0fbc78bb-1faf-4da9-ab79-cee1540bb647"). InnerVolumeSpecName "kube-api-access-t6jxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.747552 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbc78bb-1faf-4da9-ab79-cee1540bb647-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.747591 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6jxd\" (UniqueName: \"kubernetes.io/projected/0fbc78bb-1faf-4da9-ab79-cee1540bb647-kube-api-access-t6jxd\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:14 crc kubenswrapper[4902]: I0121 14:45:14.747605 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fbc78bb-1faf-4da9-ab79-cee1540bb647-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:15 crc kubenswrapper[4902]: I0121 14:45:15.297575 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" event={"ID":"0fbc78bb-1faf-4da9-ab79-cee1540bb647","Type":"ContainerDied","Data":"e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba"} Jan 21 14:45:15 crc kubenswrapper[4902]: I0121 14:45:15.297605 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2" Jan 21 14:45:15 crc kubenswrapper[4902]: I0121 14:45:15.297626 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1215334675642ac402cd6b84bd24c9b626f11f54c3e47b52d28c09e9c7a92ba" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.307829 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"] Jan 21 14:45:20 crc kubenswrapper[4902]: E0121 14:45:20.309839 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" containerName="collect-profiles" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.309956 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" containerName="collect-profiles" Jan 21 14:45:20 crc kubenswrapper[4902]: E0121 14:45:20.310090 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33301553-deaa-4183-9538-1a43f822be80" containerName="storage" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.310189 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="33301553-deaa-4183-9538-1a43f822be80" containerName="storage" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.310429 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" containerName="collect-profiles" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.310543 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="33301553-deaa-4183-9538-1a43f822be80" containerName="storage" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.311783 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.316584 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.318322 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"] Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.434367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.434450 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.434691 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.536507 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.536614 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.536655 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.537344 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.537435 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.570633 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.636630 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:20 crc kubenswrapper[4902]: I0121 14:45:20.886164 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr"] Jan 21 14:45:21 crc kubenswrapper[4902]: I0121 14:45:21.341463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerStarted","Data":"b875986b0761d560b02bc44d8e8fbac72883a5463f133e7f56fd7b5d8ec459b9"} Jan 21 14:45:21 crc kubenswrapper[4902]: I0121 14:45:21.341916 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerStarted","Data":"53ff9133fd502ef47a35311e474dd325d1f71b4604c5261be01cc0ef2bfd0077"} Jan 21 14:45:22 crc kubenswrapper[4902]: I0121 14:45:22.349424 4902 generic.go:334] "Generic (PLEG): container finished" podID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerID="b875986b0761d560b02bc44d8e8fbac72883a5463f133e7f56fd7b5d8ec459b9" exitCode=0 Jan 21 14:45:22 crc kubenswrapper[4902]: I0121 14:45:22.349474 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerDied","Data":"b875986b0761d560b02bc44d8e8fbac72883a5463f133e7f56fd7b5d8ec459b9"} Jan 21 14:45:24 crc kubenswrapper[4902]: I0121 14:45:24.364848 4902 generic.go:334] "Generic (PLEG): container finished" podID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerID="fc4ddac622b7528ae9797cbb3577446dc6fe3fbbbec9e87f4584f273623fe288" exitCode=0 Jan 21 14:45:24 crc kubenswrapper[4902]: I0121 14:45:24.364950 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" 
event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerDied","Data":"fc4ddac622b7528ae9797cbb3577446dc6fe3fbbbec9e87f4584f273623fe288"} Jan 21 14:45:25 crc kubenswrapper[4902]: I0121 14:45:25.375542 4902 generic.go:334] "Generic (PLEG): container finished" podID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerID="f486c46ee39ff38efaa5d9e3f69a67ba9a92bec84c3df2b335d54ce2d6581843" exitCode=0 Jan 21 14:45:25 crc kubenswrapper[4902]: I0121 14:45:25.375703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerDied","Data":"f486c46ee39ff38efaa5d9e3f69a67ba9a92bec84c3df2b335d54ce2d6581843"} Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.640547 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.737885 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") pod \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.738239 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") pod \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.738343 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") pod \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\" (UID: \"91ab62d2-e4b6-44ce-afc8-292ac5685c46\") " Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.739393 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle" (OuterVolumeSpecName: "bundle") pod "91ab62d2-e4b6-44ce-afc8-292ac5685c46" (UID: "91ab62d2-e4b6-44ce-afc8-292ac5685c46"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.743720 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q" (OuterVolumeSpecName: "kube-api-access-2424q") pod "91ab62d2-e4b6-44ce-afc8-292ac5685c46" (UID: "91ab62d2-e4b6-44ce-afc8-292ac5685c46"). InnerVolumeSpecName "kube-api-access-2424q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.835291 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util" (OuterVolumeSpecName: "util") pod "91ab62d2-e4b6-44ce-afc8-292ac5685c46" (UID: "91ab62d2-e4b6-44ce-afc8-292ac5685c46"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.839357 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.839380 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2424q\" (UniqueName: \"kubernetes.io/projected/91ab62d2-e4b6-44ce-afc8-292ac5685c46-kube-api-access-2424q\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:26 crc kubenswrapper[4902]: I0121 14:45:26.839389 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91ab62d2-e4b6-44ce-afc8-292ac5685c46-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:27 crc kubenswrapper[4902]: I0121 14:45:27.394844 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" event={"ID":"91ab62d2-e4b6-44ce-afc8-292ac5685c46","Type":"ContainerDied","Data":"53ff9133fd502ef47a35311e474dd325d1f71b4604c5261be01cc0ef2bfd0077"} Jan 21 14:45:27 crc kubenswrapper[4902]: I0121 14:45:27.394889 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ff9133fd502ef47a35311e474dd325d1f71b4604c5261be01cc0ef2bfd0077" Jan 21 14:45:27 crc kubenswrapper[4902]: I0121 14:45:27.395291 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.823989 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q2fs2"] Jan 21 14:45:31 crc kubenswrapper[4902]: E0121 14:45:31.824436 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="pull" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.824447 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="pull" Jan 21 14:45:31 crc kubenswrapper[4902]: E0121 14:45:31.824465 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="util" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.824471 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="util" Jan 21 14:45:31 crc kubenswrapper[4902]: E0121 14:45:31.824481 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="extract" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.824486 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="extract" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.824569 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ab62d2-e4b6-44ce-afc8-292ac5685c46" containerName="extract" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.825023 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.826781 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.830269 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.833812 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tfdrx" Jan 21 14:45:31 crc kubenswrapper[4902]: I0121 14:45:31.837097 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q2fs2"] Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.009924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8fg\" (UniqueName: \"kubernetes.io/projected/bb74694a-8b82-4c31-85da-4ba2c732bbb8-kube-api-access-th8fg\") pod \"nmstate-operator-646758c888-q2fs2\" (UID: \"bb74694a-8b82-4c31-85da-4ba2c732bbb8\") " pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.111646 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8fg\" (UniqueName: \"kubernetes.io/projected/bb74694a-8b82-4c31-85da-4ba2c732bbb8-kube-api-access-th8fg\") pod \"nmstate-operator-646758c888-q2fs2\" (UID: \"bb74694a-8b82-4c31-85da-4ba2c732bbb8\") " pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.128890 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8fg\" (UniqueName: \"kubernetes.io/projected/bb74694a-8b82-4c31-85da-4ba2c732bbb8-kube-api-access-th8fg\") pod \"nmstate-operator-646758c888-q2fs2\" (UID: \"bb74694a-8b82-4c31-85da-4ba2c732bbb8\") " pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.140454 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.324372 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q2fs2"] Jan 21 14:45:32 crc kubenswrapper[4902]: I0121 14:45:32.419948 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" event={"ID":"bb74694a-8b82-4c31-85da-4ba2c732bbb8","Type":"ContainerStarted","Data":"6e89956a53dcb676d83fe8b19783f8035fff664a5f3e7aac8df4398e7b326d9d"} Jan 21 14:45:35 crc kubenswrapper[4902]: I0121 14:45:35.435849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" event={"ID":"bb74694a-8b82-4c31-85da-4ba2c732bbb8","Type":"ContainerStarted","Data":"a7b1fa21778d2b3c170a53ce867376fa99da09058a09bbdd87fdc9bb2b7c47cd"} Jan 21 14:45:35 crc kubenswrapper[4902]: I0121 14:45:35.457040 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-q2fs2" podStartSLOduration=1.858482731 podStartE2EDuration="4.456991326s" podCreationTimestamp="2026-01-21 14:45:31 +0000 UTC" firstStartedPulling="2026-01-21 14:45:32.334741543 +0000 UTC m=+694.411574572" lastFinishedPulling="2026-01-21 14:45:34.933250138 +0000 UTC m=+697.010083167" observedRunningTime="2026-01-21 14:45:35.452877947 +0000 UTC m=+697.529710976" watchObservedRunningTime="2026-01-21 14:45:35.456991326 +0000 UTC m=+697.533824355" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.944705 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-x6qnj"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.946250 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.966891 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8plgt" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.967798 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.968642 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.970072 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.973034 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-x6qnj"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.978784 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.983969 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-p9t9n"] Jan 21 14:45:40 crc kubenswrapper[4902]: I0121 14:45:40.985603 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.063009 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.063815 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.065748 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.065783 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nh8rq" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.065902 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.069697 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136629 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-ovs-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136718 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntmc\" (UniqueName: \"kubernetes.io/projected/14dd02e5-8cb3-4382-9107-5f5b698a2701-kube-api-access-kntmc\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136754 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7rg2\" (UniqueName: \"kubernetes.io/projected/87768889-c41f-4563-8b38-3d939fa22303-kube-api-access-f7rg2\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136774 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-dbus-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87768889-c41f-4563-8b38-3d939fa22303-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136823 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7pg4\" (UniqueName: \"kubernetes.io/projected/d406f136-7416-4694-b6cd-d6bdf6b60e1f-kube-api-access-j7pg4\") pod \"nmstate-metrics-54757c584b-x6qnj\" (UID: 
\"d406f136-7416-4694-b6cd-d6bdf6b60e1f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.136879 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-nmstate-lock\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237826 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kntmc\" (UniqueName: \"kubernetes.io/projected/14dd02e5-8cb3-4382-9107-5f5b698a2701-kube-api-access-kntmc\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237883 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7rg2\" (UniqueName: \"kubernetes.io/projected/87768889-c41f-4563-8b38-3d939fa22303-kube-api-access-f7rg2\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237905 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-dbus-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237923 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2828t\" (UniqueName: \"kubernetes.io/projected/ce3bf701-2498-42d7-969d-8944df02f1c7-kube-api-access-2828t\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237946 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87768889-c41f-4563-8b38-3d939fa22303-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237961 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7pg4\" (UniqueName: \"kubernetes.io/projected/d406f136-7416-4694-b6cd-d6bdf6b60e1f-kube-api-access-j7pg4\") pod \"nmstate-metrics-54757c584b-x6qnj\" (UID: \"d406f136-7416-4694-b6cd-d6bdf6b60e1f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.237989 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ce3bf701-2498-42d7-969d-8944df02f1c7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238019 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" 
(UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-nmstate-lock\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238073 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-ovs-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238105 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238734 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-dbus-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238902 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-nmstate-lock\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.238933 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/14dd02e5-8cb3-4382-9107-5f5b698a2701-ovs-socket\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.245700 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/87768889-c41f-4563-8b38-3d939fa22303-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.264032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7pg4\" (UniqueName: \"kubernetes.io/projected/d406f136-7416-4694-b6cd-d6bdf6b60e1f-kube-api-access-j7pg4\") pod \"nmstate-metrics-54757c584b-x6qnj\" (UID: \"d406f136-7416-4694-b6cd-d6bdf6b60e1f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.264601 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.271431 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7rg2\" (UniqueName: \"kubernetes.io/projected/87768889-c41f-4563-8b38-3d939fa22303-kube-api-access-f7rg2\") pod \"nmstate-webhook-8474b5b9d8-88bkr\" (UID: \"87768889-c41f-4563-8b38-3d939fa22303\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.272915 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kntmc\" (UniqueName: \"kubernetes.io/projected/14dd02e5-8cb3-4382-9107-5f5b698a2701-kube-api-access-kntmc\") pod \"nmstate-handler-p9t9n\" (UID: \"14dd02e5-8cb3-4382-9107-5f5b698a2701\") " pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.295184 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.301624 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.321707 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d465bdf6b-lmlwx"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.322464 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.341572 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ce3bf701-2498-42d7-969d-8944df02f1c7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.341656 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.341683 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2828t\" (UniqueName: \"kubernetes.io/projected/ce3bf701-2498-42d7-969d-8944df02f1c7-kube-api-access-2828t\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: E0121 14:45:41.342335 4902 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 21 14:45:41 crc kubenswrapper[4902]: E0121 14:45:41.342413 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert podName:ce3bf701-2498-42d7-969d-8944df02f1c7 nodeName:}" failed. No retries permitted until 2026-01-21 14:45:41.842392043 +0000 UTC m=+703.919225072 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-6vz5c" (UID: "ce3bf701-2498-42d7-969d-8944df02f1c7") : secret "plugin-serving-cert" not found Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.343265 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d465bdf6b-lmlwx"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.343351 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ce3bf701-2498-42d7-969d-8944df02f1c7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: W0121 14:45:41.357605 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14dd02e5_8cb3_4382_9107_5f5b698a2701.slice/crio-f685391b36c90acca2dff7bd4e5a34f530bb4c9af6a6b96d1a20eabb461594b3 WatchSource:0}: Error finding container f685391b36c90acca2dff7bd4e5a34f530bb4c9af6a6b96d1a20eabb461594b3: Status 404 returned error can't find the container with id f685391b36c90acca2dff7bd4e5a34f530bb4c9af6a6b96d1a20eabb461594b3 Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.372537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2828t\" (UniqueName: \"kubernetes.io/projected/ce3bf701-2498-42d7-969d-8944df02f1c7-kube-api-access-2828t\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.442641 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-trusted-ca-bundle\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.442685 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp2gl\" (UniqueName: \"kubernetes.io/projected/08e0dea0-bfea-427f-b481-61e8d54dee3b-kube-api-access-pp2gl\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.442717 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.443727 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-oauth-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.443771 4902 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-service-ca\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.443802 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-oauth-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.443820 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.476068 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p9t9n" event={"ID":"14dd02e5-8cb3-4382-9107-5f5b698a2701","Type":"ContainerStarted","Data":"f685391b36c90acca2dff7bd4e5a34f530bb4c9af6a6b96d1a20eabb461594b3"} Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545476 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-service-ca\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545526 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-oauth-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545544 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545586 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-trusted-ca-bundle\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545614 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp2gl\" (UniqueName: \"kubernetes.io/projected/08e0dea0-bfea-427f-b481-61e8d54dee3b-kube-api-access-pp2gl\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545637 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.545672 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-oauth-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.547568 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.549203 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-service-ca\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.550269 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-trusted-ca-bundle\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.551685 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08e0dea0-bfea-427f-b481-61e8d54dee3b-oauth-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.551864 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-oauth-config\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.553409 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08e0dea0-bfea-427f-b481-61e8d54dee3b-console-serving-cert\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.558004 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-x6qnj"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.572744 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp2gl\" (UniqueName: \"kubernetes.io/projected/08e0dea0-bfea-427f-b481-61e8d54dee3b-kube-api-access-pp2gl\") pod \"console-5d465bdf6b-lmlwx\" (UID: \"08e0dea0-bfea-427f-b481-61e8d54dee3b\") " 
pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.679713 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.822862 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr"] Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.849577 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.853553 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3bf701-2498-42d7-969d-8944df02f1c7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-6vz5c\" (UID: \"ce3bf701-2498-42d7-969d-8944df02f1c7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.893124 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d465bdf6b-lmlwx"] Jan 21 14:45:41 crc kubenswrapper[4902]: W0121 14:45:41.894850 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e0dea0_bfea_427f_b481_61e8d54dee3b.slice/crio-29104ebeb778472fb768bbf7e0966114429742a82f494e7703718863a2882728 WatchSource:0}: Error finding container 29104ebeb778472fb768bbf7e0966114429742a82f494e7703718863a2882728: Status 404 returned error can't find the container with id 29104ebeb778472fb768bbf7e0966114429742a82f494e7703718863a2882728 Jan 21 14:45:41 crc kubenswrapper[4902]: I0121 14:45:41.987449 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.194149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c"] Jan 21 14:45:42 crc kubenswrapper[4902]: W0121 14:45:42.199080 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3bf701_2498_42d7_969d_8944df02f1c7.slice/crio-37c376c3b774a4ece34265c7e4379b250597a8de58cdc3d164fa14514d7104d4 WatchSource:0}: Error finding container 37c376c3b774a4ece34265c7e4379b250597a8de58cdc3d164fa14514d7104d4: Status 404 returned error can't find the container with id 37c376c3b774a4ece34265c7e4379b250597a8de58cdc3d164fa14514d7104d4 Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.485242 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" event={"ID":"87768889-c41f-4563-8b38-3d939fa22303","Type":"ContainerStarted","Data":"355c10a1670be8e994cc74d7b45bc4fc91ecaa1527aea0a4e848d6330a572126"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.487012 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" event={"ID":"d406f136-7416-4694-b6cd-d6bdf6b60e1f","Type":"ContainerStarted","Data":"062dda8ed40fcbefabba7bfe9e9599c40bd3cfcfa12864c1841541ad12cb2094"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.488010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" event={"ID":"ce3bf701-2498-42d7-969d-8944df02f1c7","Type":"ContainerStarted","Data":"37c376c3b774a4ece34265c7e4379b250597a8de58cdc3d164fa14514d7104d4"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.489758 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d465bdf6b-lmlwx" event={"ID":"08e0dea0-bfea-427f-b481-61e8d54dee3b","Type":"ContainerStarted","Data":"17a160a25df378fe575baaee56b145bb427fd51c26e693c0e43d6b048dc0119b"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.489788 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d465bdf6b-lmlwx" event={"ID":"08e0dea0-bfea-427f-b481-61e8d54dee3b","Type":"ContainerStarted","Data":"29104ebeb778472fb768bbf7e0966114429742a82f494e7703718863a2882728"} Jan 21 14:45:42 crc kubenswrapper[4902]: I0121 14:45:42.510181 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d465bdf6b-lmlwx" podStartSLOduration=1.5101570720000002 podStartE2EDuration="1.510157072s" podCreationTimestamp="2026-01-21 14:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:45:42.50559224 +0000 UTC m=+704.582425299" watchObservedRunningTime="2026-01-21 14:45:42.510157072 +0000 UTC m=+704.586990101" Jan 21 14:45:44 crc kubenswrapper[4902]: I0121 14:45:44.505408 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" event={"ID":"87768889-c41f-4563-8b38-3d939fa22303","Type":"ContainerStarted","Data":"513c8d8798bebd9b85401a861b031560b4c21ec0614f2f08f971782e45df7d10"} Jan 21 14:45:44 crc kubenswrapper[4902]: I0121 14:45:44.507325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:45:44 crc 
kubenswrapper[4902]: I0121 14:45:44.510766 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" event={"ID":"d406f136-7416-4694-b6cd-d6bdf6b60e1f","Type":"ContainerStarted","Data":"04ace925f9f9a2cc1e1d5d5f94788d8b87b4fc28aa4736b9413cdb9869730af8"} Jan 21 14:45:44 crc kubenswrapper[4902]: I0121 14:45:44.530661 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" podStartSLOduration=2.06139104 podStartE2EDuration="4.530632902s" podCreationTimestamp="2026-01-21 14:45:40 +0000 UTC" firstStartedPulling="2026-01-21 14:45:41.833632581 +0000 UTC m=+703.910465610" lastFinishedPulling="2026-01-21 14:45:44.302874443 +0000 UTC m=+706.379707472" observedRunningTime="2026-01-21 14:45:44.529642513 +0000 UTC m=+706.606475552" watchObservedRunningTime="2026-01-21 14:45:44.530632902 +0000 UTC m=+706.607465931" Jan 21 14:45:45 crc kubenswrapper[4902]: I0121 14:45:45.520410 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p9t9n" event={"ID":"14dd02e5-8cb3-4382-9107-5f5b698a2701","Type":"ContainerStarted","Data":"f53b88ff4b2343318e18fbdd3016cf4ff4a79803161c4c527b9e2876631249a3"} Jan 21 14:45:45 crc kubenswrapper[4902]: I0121 14:45:45.520780 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:45 crc kubenswrapper[4902]: I0121 14:45:45.542332 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-p9t9n" podStartSLOduration=2.66624941 podStartE2EDuration="5.542312923s" podCreationTimestamp="2026-01-21 14:45:40 +0000 UTC" firstStartedPulling="2026-01-21 14:45:41.371226625 +0000 UTC m=+703.448059654" lastFinishedPulling="2026-01-21 14:45:44.247290118 +0000 UTC m=+706.324123167" observedRunningTime="2026-01-21 14:45:45.542148678 +0000 UTC m=+707.618981747" watchObservedRunningTime="2026-01-21 14:45:45.542312923 +0000 UTC m=+707.619145952" Jan 21 14:45:46 crc kubenswrapper[4902]: I0121 14:45:46.532172 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" event={"ID":"ce3bf701-2498-42d7-969d-8944df02f1c7","Type":"ContainerStarted","Data":"cab0a46ee65cc527ba9f68932d1600b34fb1e18af80107f6c6fca9ad6a595d2c"} Jan 21 14:45:46 crc kubenswrapper[4902]: I0121 14:45:46.557539 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-6vz5c" podStartSLOduration=2.167596421 podStartE2EDuration="5.557513076s" podCreationTimestamp="2026-01-21 14:45:41 +0000 UTC" firstStartedPulling="2026-01-21 14:45:42.201828326 +0000 UTC m=+704.278661345" lastFinishedPulling="2026-01-21 14:45:45.591744971 +0000 UTC m=+707.668578000" observedRunningTime="2026-01-21 14:45:46.549111374 +0000 UTC m=+708.625944423" watchObservedRunningTime="2026-01-21 14:45:46.557513076 +0000 UTC m=+708.634346115" Jan 21 14:45:47 crc kubenswrapper[4902]: I0121 14:45:47.538077 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" event={"ID":"d406f136-7416-4694-b6cd-d6bdf6b60e1f","Type":"ContainerStarted","Data":"6e3db7c12b35283423b30efb8976925ddcb8d39fb4900c2b8f2650c2603179f4"} Jan 21 14:45:47 crc kubenswrapper[4902]: I0121 14:45:47.563487 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-x6qnj" 
podStartSLOduration=2.138001753 podStartE2EDuration="7.563457333s" podCreationTimestamp="2026-01-21 14:45:40 +0000 UTC" firstStartedPulling="2026-01-21 14:45:41.563363165 +0000 UTC m=+703.640196194" lastFinishedPulling="2026-01-21 14:45:46.988818725 +0000 UTC m=+709.065651774" observedRunningTime="2026-01-21 14:45:47.554138874 +0000 UTC m=+709.630971983" watchObservedRunningTime="2026-01-21 14:45:47.563457333 +0000 UTC m=+709.640290392" Jan 21 14:45:51 crc kubenswrapper[4902]: I0121 14:45:51.333350 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-p9t9n" Jan 21 14:45:51 crc kubenswrapper[4902]: I0121 14:45:51.680275 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:51 crc kubenswrapper[4902]: I0121 14:45:51.680372 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:51 crc kubenswrapper[4902]: I0121 14:45:51.688555 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:52 crc kubenswrapper[4902]: I0121 14:45:52.580334 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d465bdf6b-lmlwx" Jan 21 14:45:52 crc kubenswrapper[4902]: I0121 14:45:52.639960 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:46:01 crc kubenswrapper[4902]: I0121 14:46:01.301941 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-88bkr" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.646928 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw"] Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.648622 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.651810 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.659941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw"] Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.789594 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.789700 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.789894 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.891808 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.891859 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.891917 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.892376 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.892542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.919954 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:15 crc kubenswrapper[4902]: I0121 14:46:15.965011 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:16 crc kubenswrapper[4902]: I0121 14:46:16.153119 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw"] Jan 21 14:46:16 crc kubenswrapper[4902]: I0121 14:46:16.708581 4902 generic.go:334] "Generic (PLEG): container finished" podID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerID="056949580ed2fcf95c59c991288fe924f7c3867e56878d30dca4327cb8866274" exitCode=0 Jan 21 14:46:16 crc kubenswrapper[4902]: I0121 14:46:16.708630 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerDied","Data":"056949580ed2fcf95c59c991288fe924f7c3867e56878d30dca4327cb8866274"} Jan 21 14:46:16 crc kubenswrapper[4902]: I0121 14:46:16.708841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerStarted","Data":"2a81671ed1bad6d4359aa6ae00fb18ca71b35101ea8e33f3a97da3ccb4d25f90"} Jan 21 14:46:17 crc kubenswrapper[4902]: I0121 14:46:17.676212 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9nw4v" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" containerID="cri-o://fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0" gracePeriod=15 Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.721582 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9nw4v_853f0809-8828-4976-9b04-dd078ab64ced/console/0.log" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.721651 4902 generic.go:334] "Generic (PLEG): container finished" podID="853f0809-8828-4976-9b04-dd078ab64ced" containerID="fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0" exitCode=2 Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.721697 4902 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9nw4v" event={"ID":"853f0809-8828-4976-9b04-dd078ab64ced","Type":"ContainerDied","Data":"fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0"} Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.819082 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9nw4v_853f0809-8828-4976-9b04-dd078ab64ced/console/0.log" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.819385 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929608 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929686 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929711 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929757 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929834 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929897 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.929960 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") pod \"853f0809-8828-4976-9b04-dd078ab64ced\" (UID: \"853f0809-8828-4976-9b04-dd078ab64ced\") " Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.930651 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config" (OuterVolumeSpecName: "console-config") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.930670 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.930698 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.930881 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca" (OuterVolumeSpecName: "service-ca") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.935214 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.935274 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p" (OuterVolumeSpecName: "kube-api-access-xg77p") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "kube-api-access-xg77p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:46:18 crc kubenswrapper[4902]: I0121 14:46:18.936150 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "853f0809-8828-4976-9b04-dd078ab64ced" (UID: "853f0809-8828-4976-9b04-dd078ab64ced"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031472 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg77p\" (UniqueName: \"kubernetes.io/projected/853f0809-8828-4976-9b04-dd078ab64ced-kube-api-access-xg77p\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031520 4902 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031535 4902 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031547 4902 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031559 4902 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031570 4902 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/853f0809-8828-4976-9b04-dd078ab64ced-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.031581 4902 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/853f0809-8828-4976-9b04-dd078ab64ced-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.731890 4902 generic.go:334] "Generic (PLEG): container finished" podID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerID="8dd4a2451a9a58dac129a4d9b7533aef9b799c33e51b43be85745f80c57a2168" exitCode=0 Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.732007 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerDied","Data":"8dd4a2451a9a58dac129a4d9b7533aef9b799c33e51b43be85745f80c57a2168"} Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.736935 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9nw4v_853f0809-8828-4976-9b04-dd078ab64ced/console/0.log" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.737000 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9nw4v" event={"ID":"853f0809-8828-4976-9b04-dd078ab64ced","Type":"ContainerDied","Data":"11dbd86a6b371ca7401386f5e9d390f798d2eff9c897fbde80c73fd4547eac53"} Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.737069 4902 scope.go:117] "RemoveContainer" containerID="fff0e780f43c17189c7dce1045515753af56428025b126e2b903e1fb3882c9d0" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.737154 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9nw4v" Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.844938 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:46:19 crc kubenswrapper[4902]: I0121 14:46:19.856592 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9nw4v"] Jan 21 14:46:20 crc kubenswrapper[4902]: I0121 14:46:20.302367 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853f0809-8828-4976-9b04-dd078ab64ced" path="/var/lib/kubelet/pods/853f0809-8828-4976-9b04-dd078ab64ced/volumes" Jan 21 14:46:20 crc kubenswrapper[4902]: I0121 14:46:20.747619 4902 generic.go:334] "Generic (PLEG): container finished" podID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerID="3ad4015d5c8074c484485ea54b519048f0f8f14c880d8f9b8af977414a857907" exitCode=0 Jan 21 14:46:20 crc kubenswrapper[4902]: I0121 14:46:20.747768 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerDied","Data":"3ad4015d5c8074c484485ea54b519048f0f8f14c880d8f9b8af977414a857907"} Jan 21 14:46:21 crc kubenswrapper[4902]: I0121 14:46:21.993592 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.171498 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") pod \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.171604 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") pod \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.171656 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") pod \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\" (UID: \"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea\") " Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.172392 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle" (OuterVolumeSpecName: "bundle") pod "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" (UID: "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.178312 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6" (OuterVolumeSpecName: "kube-api-access-r6gp6") pod "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" (UID: "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea"). InnerVolumeSpecName "kube-api-access-r6gp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.182203 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util" (OuterVolumeSpecName: "util") pod "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" (UID: "5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.272851 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6gp6\" (UniqueName: \"kubernetes.io/projected/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-kube-api-access-r6gp6\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.272906 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.272929 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.761884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" event={"ID":"5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea","Type":"ContainerDied","Data":"2a81671ed1bad6d4359aa6ae00fb18ca71b35101ea8e33f3a97da3ccb4d25f90"} Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.761918 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a81671ed1bad6d4359aa6ae00fb18ca71b35101ea8e33f3a97da3ccb4d25f90" Jan 21 14:46:22 crc kubenswrapper[4902]: I0121 14:46:22.761949 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw" Jan 21 14:46:32 crc kubenswrapper[4902]: I0121 14:46:32.113012 4902 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092040 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68"] Jan 21 14:46:38 crc kubenswrapper[4902]: E0121 14:46:38.092826 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092841 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" Jan 21 14:46:38 crc kubenswrapper[4902]: E0121 14:46:38.092859 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="pull" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092867 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="pull" Jan 21 14:46:38 crc kubenswrapper[4902]: E0121 14:46:38.092893 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="util" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092901 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="util" Jan 21 14:46:38 crc kubenswrapper[4902]: E0121 14:46:38.092911 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="extract" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.092918 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="extract" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.093031 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea" containerName="extract" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.093070 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="853f0809-8828-4976-9b04-dd078ab64ced" containerName="console" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.093502 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.097888 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.098216 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.098609 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.098746 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.099562 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7wtgs" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.113379 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68"] Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.171083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9bt\" (UniqueName: \"kubernetes.io/projected/1ddec7fa-7afd-4d77-af77-509910e52c70-kube-api-access-fs9bt\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.171183 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-apiservice-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.171211 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-webhook-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.272110 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-apiservice-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.272401 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-webhook-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.272524 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs9bt\" (UniqueName: \"kubernetes.io/projected/1ddec7fa-7afd-4d77-af77-509910e52c70-kube-api-access-fs9bt\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.278525 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-apiservice-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.282811 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ddec7fa-7afd-4d77-af77-509910e52c70-webhook-cert\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.310647 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs9bt\" (UniqueName: \"kubernetes.io/projected/1ddec7fa-7afd-4d77-af77-509910e52c70-kube-api-access-fs9bt\") pod \"metallb-operator-controller-manager-6c6bfc4dcb-mzr68\" (UID: \"1ddec7fa-7afd-4d77-af77-509910e52c70\") " pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.409542 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.429616 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn"] Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.430320 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.433179 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.453881 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.459173 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn"] Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.459848 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tm4rt" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.578715 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwlzb\" (UniqueName: \"kubernetes.io/projected/050f3d44-1ff2-4334-8fa8-c5124c7199d9-kube-api-access-xwlzb\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.579095 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-apiservice-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.579194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-webhook-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.680926 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-apiservice-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.680964 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-webhook-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.681000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwlzb\" (UniqueName: \"kubernetes.io/projected/050f3d44-1ff2-4334-8fa8-c5124c7199d9-kube-api-access-xwlzb\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 
14:46:38.685743 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-webhook-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.685813 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/050f3d44-1ff2-4334-8fa8-c5124c7199d9-apiservice-cert\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.695804 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwlzb\" (UniqueName: \"kubernetes.io/projected/050f3d44-1ff2-4334-8fa8-c5124c7199d9-kube-api-access-xwlzb\") pod \"metallb-operator-webhook-server-79cc595b65-5xnzn\" (UID: \"050f3d44-1ff2-4334-8fa8-c5124c7199d9\") " pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.793677 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:38 crc kubenswrapper[4902]: I0121 14:46:38.882294 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68"] Jan 21 14:46:38 crc kubenswrapper[4902]: W0121 14:46:38.895445 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ddec7fa_7afd_4d77_af77_509910e52c70.slice/crio-9eb019b3ebc29d56f418f06e561e1946d0224b3fa30fb264add3c7e61150309b WatchSource:0}: Error finding container 9eb019b3ebc29d56f418f06e561e1946d0224b3fa30fb264add3c7e61150309b: Status 404 returned error can't find the container with id 9eb019b3ebc29d56f418f06e561e1946d0224b3fa30fb264add3c7e61150309b Jan 21 14:46:39 crc kubenswrapper[4902]: I0121 14:46:39.015240 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn"] Jan 21 14:46:39 crc kubenswrapper[4902]: W0121 14:46:39.024278 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod050f3d44_1ff2_4334_8fa8_c5124c7199d9.slice/crio-e6061992f15f7c646e86924c7a52c204e160c01ed4bf75ca4f08665781fe2863 WatchSource:0}: Error finding container e6061992f15f7c646e86924c7a52c204e160c01ed4bf75ca4f08665781fe2863: Status 404 returned error can't find the container with id e6061992f15f7c646e86924c7a52c204e160c01ed4bf75ca4f08665781fe2863 Jan 21 14:46:39 crc kubenswrapper[4902]: I0121 14:46:39.866602 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" event={"ID":"1ddec7fa-7afd-4d77-af77-509910e52c70","Type":"ContainerStarted","Data":"9eb019b3ebc29d56f418f06e561e1946d0224b3fa30fb264add3c7e61150309b"} Jan 21 14:46:39 crc kubenswrapper[4902]: I0121 14:46:39.867860 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" 
event={"ID":"050f3d44-1ff2-4334-8fa8-c5124c7199d9","Type":"ContainerStarted","Data":"e6061992f15f7c646e86924c7a52c204e160c01ed4bf75ca4f08665781fe2863"} Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.891715 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" event={"ID":"050f3d44-1ff2-4334-8fa8-c5124c7199d9","Type":"ContainerStarted","Data":"d0d579ffc2ae50b314775f0c499c83a641ccc1d184e9970f60e46ef2957e16a0"} Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.892190 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.893350 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" event={"ID":"1ddec7fa-7afd-4d77-af77-509910e52c70","Type":"ContainerStarted","Data":"ffee974c6326a69b230da23362ffb000b08e60c3f1bc0cdbea4581eaaa918bd3"} Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.893548 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.920362 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" podStartSLOduration=1.4386846580000001 podStartE2EDuration="5.920341788s" podCreationTimestamp="2026-01-21 14:46:38 +0000 UTC" firstStartedPulling="2026-01-21 14:46:39.027233067 +0000 UTC m=+761.104066096" lastFinishedPulling="2026-01-21 14:46:43.508890197 +0000 UTC m=+765.585723226" observedRunningTime="2026-01-21 14:46:43.915617403 +0000 UTC m=+765.992450472" watchObservedRunningTime="2026-01-21 14:46:43.920341788 +0000 UTC m=+765.997174817" Jan 21 14:46:43 crc kubenswrapper[4902]: I0121 14:46:43.938584 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" podStartSLOduration=1.350254058 podStartE2EDuration="5.938560982s" podCreationTimestamp="2026-01-21 14:46:38 +0000 UTC" firstStartedPulling="2026-01-21 14:46:38.899163498 +0000 UTC m=+760.975996537" lastFinishedPulling="2026-01-21 14:46:43.487470432 +0000 UTC m=+765.564303461" observedRunningTime="2026-01-21 14:46:43.934773623 +0000 UTC m=+766.011606662" watchObservedRunningTime="2026-01-21 14:46:43.938560982 +0000 UTC m=+766.015394021" Jan 21 14:46:58 crc kubenswrapper[4902]: I0121 14:46:58.798753 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79cc595b65-5xnzn" Jan 21 14:47:17 crc kubenswrapper[4902]: I0121 14:47:17.769420 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:47:17 crc kubenswrapper[4902]: I0121 14:47:17.769990 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:47:18 crc kubenswrapper[4902]: I0121 
14:47:18.412621 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c6bfc4dcb-mzr68" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.210542 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xpzj8"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.213137 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.215259 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.216093 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.216853 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.216896 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.217079 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-z9vpk" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.217822 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.227396 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275513 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics-certs\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275616 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-conf\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275646 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhsh\" (UniqueName: \"kubernetes.io/projected/6fc6639b-9150-4158-836f-1ffc1c4f5339-kube-api-access-fvhsh\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275758 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-startup\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275785 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg5ss\" (UniqueName: \"kubernetes.io/projected/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-kube-api-access-bg5ss\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275890 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-reloader\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.275926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-sockets\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.321978 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5m6ct"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.322892 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.326094 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.326191 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.327471 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4mbx4" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.327775 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.335448 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-h2pgt"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.340112 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.348917 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.363152 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-h2pgt"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376711 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-reloader\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376781 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-sockets\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376812 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kwhk\" (UniqueName: \"kubernetes.io/projected/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-kube-api-access-5kwhk\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376828 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376845 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376866 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics-certs\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376894 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-conf\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376912 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhsh\" (UniqueName: \"kubernetes.io/projected/6fc6639b-9150-4158-836f-1ffc1c4f5339-kube-api-access-fvhsh\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376941 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-startup\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376956 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metallb-excludel2\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.376987 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.377003 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg5ss\" (UniqueName: \"kubernetes.io/projected/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-kube-api-access-bg5ss\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.377601 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-reloader\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.377778 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-sockets\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.378557 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-conf\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.378750 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.379190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6fc6639b-9150-4158-836f-1ffc1c4f5339-frr-startup\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.383357 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.386360 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6fc6639b-9150-4158-836f-1ffc1c4f5339-metrics-certs\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.395651 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg5ss\" (UniqueName: \"kubernetes.io/projected/4f8bf62b-aae0-4080-a5ee-2472a60fe41f-kube-api-access-bg5ss\") pod \"frr-k8s-webhook-server-7df86c4f6c-72rgj\" (UID: \"4f8bf62b-aae0-4080-a5ee-2472a60fe41f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.415867 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhsh\" (UniqueName: \"kubernetes.io/projected/6fc6639b-9150-4158-836f-1ffc1c4f5339-kube-api-access-fvhsh\") pod \"frr-k8s-xpzj8\" (UID: \"6fc6639b-9150-4158-836f-1ffc1c4f5339\") " pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.478645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kwhk\" (UniqueName: \"kubernetes.io/projected/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-kube-api-access-5kwhk\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.479127 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.479242 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-cert\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.479346 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-metrics-certs\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.479257 4902 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.479541 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs podName:4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:19.979507862 +0000 UTC m=+802.056340881 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs") pod "speaker-5m6ct" (UID: "4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501") : secret "speaker-certs-secret" not found Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.480167 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn6d\" (UniqueName: \"kubernetes.io/projected/694bf42b-c612-44c2-964b-c91336b8afa1-kube-api-access-2fn6d\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.480326 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metallb-excludel2\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.480460 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.480642 4902 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.480816 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist podName:4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:19.980792918 +0000 UTC m=+802.057625947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist") pod "speaker-5m6ct" (UID: "4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501") : secret "metallb-memberlist" not found Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.481202 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metallb-excludel2\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.498542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kwhk\" (UniqueName: \"kubernetes.io/projected/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-kube-api-access-5kwhk\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.545018 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.557415 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.584010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-cert\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.584295 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-metrics-certs\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.584366 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fn6d\" (UniqueName: \"kubernetes.io/projected/694bf42b-c612-44c2-964b-c91336b8afa1-kube-api-access-2fn6d\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.587272 4902 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.590592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-metrics-certs\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.597291 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/694bf42b-c612-44c2-964b-c91336b8afa1-cert\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.601680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fn6d\" (UniqueName: \"kubernetes.io/projected/694bf42b-c612-44c2-964b-c91336b8afa1-kube-api-access-2fn6d\") pod \"controller-6968d8fdc4-h2pgt\" (UID: \"694bf42b-c612-44c2-964b-c91336b8afa1\") " pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.668948 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.781199 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj"] Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.988832 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.988900 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.989079 4902 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 14:47:19 crc kubenswrapper[4902]: E0121 14:47:19.989240 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist podName:4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:20.989215143 +0000 UTC m=+803.066048202 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist") pod "speaker-5m6ct" (UID: "4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501") : secret "metallb-memberlist" not found Jan 21 14:47:19 crc kubenswrapper[4902]: I0121 14:47:19.994471 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-metrics-certs\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:20 crc kubenswrapper[4902]: I0121 14:47:20.061897 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-h2pgt"] Jan 21 14:47:20 crc kubenswrapper[4902]: W0121 14:47:20.062227 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod694bf42b_c612_44c2_964b_c91336b8afa1.slice/crio-fc563eb4070b9583b0d9ef6ecbd3c8ca154fb03044038095e7275d6e8955a104 WatchSource:0}: Error finding container fc563eb4070b9583b0d9ef6ecbd3c8ca154fb03044038095e7275d6e8955a104: Status 404 returned error can't find the container with id fc563eb4070b9583b0d9ef6ecbd3c8ca154fb03044038095e7275d6e8955a104 Jan 21 14:47:20 crc kubenswrapper[4902]: I0121 14:47:20.097355 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-h2pgt" event={"ID":"694bf42b-c612-44c2-964b-c91336b8afa1","Type":"ContainerStarted","Data":"fc563eb4070b9583b0d9ef6ecbd3c8ca154fb03044038095e7275d6e8955a104"} Jan 21 14:47:20 crc kubenswrapper[4902]: I0121 14:47:20.098674 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" event={"ID":"4f8bf62b-aae0-4080-a5ee-2472a60fe41f","Type":"ContainerStarted","Data":"3ebc5e0972d1ffc10954a17589e962b34e16d9c215cbf7732d46b98beb35449b"} Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.006382 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.011624 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501-memberlist\") pod \"speaker-5m6ct\" (UID: \"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501\") " pod="metallb-system/speaker-5m6ct" Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.105873 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"d0d982d2efc1c22113547c6f8b2c34d2f38c9e25c7ad8b1d4994153eb7424112"} Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.107500 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-h2pgt" event={"ID":"694bf42b-c612-44c2-964b-c91336b8afa1","Type":"ContainerStarted","Data":"2dc60d565ba8f1f83ee0c26cd8fc741bc8f7fa061135cf861a1cbc885fda2c83"} Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.107532 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-h2pgt" event={"ID":"694bf42b-c612-44c2-964b-c91336b8afa1","Type":"ContainerStarted","Data":"25c8154f40f7603c4d8d3a5d2c09aab2847e96fd9b1220d7afe61265f5eacc6d"} Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.107669 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.144756 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-h2pgt" podStartSLOduration=2.144737557 podStartE2EDuration="2.144737557s" podCreationTimestamp="2026-01-21 14:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:47:21.139717706 +0000 UTC m=+803.216550735" watchObservedRunningTime="2026-01-21 14:47:21.144737557 +0000 UTC m=+803.221570576" Jan 21 14:47:21 crc kubenswrapper[4902]: I0121 14:47:21.144965 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5m6ct" Jan 21 14:47:21 crc kubenswrapper[4902]: W0121 14:47:21.177746 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fbfffc0_8fac_4684_9cc8_2a3bcc3cb501.slice/crio-a70fc81265026e8babae3f6efb71ecce387fdea03a2b3759a5ae48cf5a1abb7b WatchSource:0}: Error finding container a70fc81265026e8babae3f6efb71ecce387fdea03a2b3759a5ae48cf5a1abb7b: Status 404 returned error can't find the container with id a70fc81265026e8babae3f6efb71ecce387fdea03a2b3759a5ae48cf5a1abb7b Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.136380 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5m6ct" event={"ID":"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501","Type":"ContainerStarted","Data":"4fc0a47cc3cd82a1abbef70b7e66a4dda864f4b181f8a7bb00e4670b82e62a14"} Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.136423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5m6ct" event={"ID":"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501","Type":"ContainerStarted","Data":"229065e795540674d1aa9b00ea4d50f98e2e29a00f768bd69418f69dc9189cac"} Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.136433 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5m6ct" event={"ID":"4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501","Type":"ContainerStarted","Data":"a70fc81265026e8babae3f6efb71ecce387fdea03a2b3759a5ae48cf5a1abb7b"} Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.136951 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5m6ct" Jan 21 14:47:22 crc kubenswrapper[4902]: I0121 14:47:22.156751 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5m6ct" podStartSLOduration=3.156735049 podStartE2EDuration="3.156735049s" podCreationTimestamp="2026-01-21 14:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:47:22.153272382 +0000 UTC m=+804.230105411" watchObservedRunningTime="2026-01-21 14:47:22.156735049 +0000 UTC m=+804.233568078" Jan 21 14:47:27 crc kubenswrapper[4902]: I0121 14:47:27.176901 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" event={"ID":"4f8bf62b-aae0-4080-a5ee-2472a60fe41f","Type":"ContainerStarted","Data":"2a2242ac31d7b6d5e1f5b204f94d2bdf1563dd9b5c76f0dea6ea42216c2a245d"} Jan 21 14:47:28 crc kubenswrapper[4902]: I0121 14:47:28.184291 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fc6639b-9150-4158-836f-1ffc1c4f5339" containerID="c40e98fb7dd2e59e131c732ec995037207c2554ff24797630bce0da5de4d7313" exitCode=0 Jan 21 14:47:28 crc kubenswrapper[4902]: I0121 14:47:28.184373 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerDied","Data":"c40e98fb7dd2e59e131c732ec995037207c2554ff24797630bce0da5de4d7313"} Jan 21 14:47:28 crc kubenswrapper[4902]: I0121 14:47:28.207978 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" podStartSLOduration=1.986228116 podStartE2EDuration="9.207957686s" podCreationTimestamp="2026-01-21 14:47:19 +0000 UTC" firstStartedPulling="2026-01-21 14:47:19.793992453 +0000 UTC m=+801.870825482" 
lastFinishedPulling="2026-01-21 14:47:27.015722023 +0000 UTC m=+809.092555052" observedRunningTime="2026-01-21 14:47:28.202871394 +0000 UTC m=+810.279704433" watchObservedRunningTime="2026-01-21 14:47:28.207957686 +0000 UTC m=+810.284790705" Jan 21 14:47:29 crc kubenswrapper[4902]: I0121 14:47:29.191573 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fc6639b-9150-4158-836f-1ffc1c4f5339" containerID="6eda013d84d74c5cd2bf23a93379960e8c3955d812e4b16d01246d6591b5b0f0" exitCode=0 Jan 21 14:47:29 crc kubenswrapper[4902]: I0121 14:47:29.191664 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerDied","Data":"6eda013d84d74c5cd2bf23a93379960e8c3955d812e4b16d01246d6591b5b0f0"} Jan 21 14:47:29 crc kubenswrapper[4902]: I0121 14:47:29.192075 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:30 crc kubenswrapper[4902]: I0121 14:47:30.200800 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fc6639b-9150-4158-836f-1ffc1c4f5339" containerID="410f3ac73f83325698e7da819f9835d8c0c4f5701a1b23386391064a5f04454e" exitCode=0 Jan 21 14:47:30 crc kubenswrapper[4902]: I0121 14:47:30.200875 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerDied","Data":"410f3ac73f83325698e7da819f9835d8c0c4f5701a1b23386391064a5f04454e"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.154383 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5m6ct" Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.212985 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"273414f1045bbfa16013d2380fed0883dbefd979d6449007cfb94e6c9f7fc4b0"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.213033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"aefcc73b21c339b9a55d374c0caaf50e585a32c4e5bdc8b6e3fda20de9709e6f"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.213064 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"3f9e094ffe7c03f4513dca55d7eb8b90a6abcfd2d9fc2413ca33bc769e18f5cb"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.213075 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"642d3400012bd7e6aea09781934cabf9f6d86a98a245d3730ef906a9a9c14b6f"} Jan 21 14:47:31 crc kubenswrapper[4902]: I0121 14:47:31.213086 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"26a2ffbba2b4c68814f3f037364db515fed647f6b2e66468ba5f01a929a1b21b"} Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.223377 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xpzj8" event={"ID":"6fc6639b-9150-4158-836f-1ffc1c4f5339","Type":"ContainerStarted","Data":"a4c38dbe988b16a8b5b7e5aa5fe888bb86be8a463b28730e6f363daf0391799f"} 
Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.223566 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.248912 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xpzj8" podStartSLOduration=6.883316598 podStartE2EDuration="13.248895521s" podCreationTimestamp="2026-01-21 14:47:19 +0000 UTC" firstStartedPulling="2026-01-21 14:47:20.670069488 +0000 UTC m=+802.746902537" lastFinishedPulling="2026-01-21 14:47:27.035648431 +0000 UTC m=+809.112481460" observedRunningTime="2026-01-21 14:47:32.247994255 +0000 UTC m=+814.324827284" watchObservedRunningTime="2026-01-21 14:47:32.248895521 +0000 UTC m=+814.325728550" Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.950812 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj"] Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.952180 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.954171 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 14:47:32 crc kubenswrapper[4902]: I0121 14:47:32.964250 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj"] Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.070334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.070410 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.070601 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.172139 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: 
I0121 14:47:33.172475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.172583 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.172631 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.172779 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.194874 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.270007 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:33 crc kubenswrapper[4902]: I0121 14:47:33.723720 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj"] Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.240064 4902 generic.go:334] "Generic (PLEG): container finished" podID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerID="0a4fd68fee547445dae37cf6af19dedf174c5364ea24c8619910287dbe0d9338" exitCode=0 Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.240108 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerDied","Data":"0a4fd68fee547445dae37cf6af19dedf174c5364ea24c8619910287dbe0d9338"} Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.240135 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerStarted","Data":"6962038ec7ce5218af0f0011e64727b716daf49cb4483a70fdeb952e8753254a"} Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.546111 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:34 crc kubenswrapper[4902]: I0121 14:47:34.616435 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.521883 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.523245 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.535308 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.618871 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.618982 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.619007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.719766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.719863 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.719927 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.720672 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.720773 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.738069 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") pod \"redhat-operators-fzz8m\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:36 crc kubenswrapper[4902]: I0121 14:47:36.843692 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:37 crc kubenswrapper[4902]: I0121 14:47:37.277311 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:37 crc kubenswrapper[4902]: W0121 14:47:37.289587 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a05951_5e50_40de_92cd_2e064b9251f6.slice/crio-9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697 WatchSource:0}: Error finding container 9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697: Status 404 returned error can't find the container with id 9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697 Jan 21 14:47:38 crc kubenswrapper[4902]: I0121 14:47:38.277678 4902 generic.go:334] "Generic (PLEG): container finished" podID="89a05951-5e50-40de-92cd-2e064b9251f6" containerID="3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9" exitCode=0 Jan 21 14:47:38 crc kubenswrapper[4902]: I0121 14:47:38.277795 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerDied","Data":"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9"} Jan 21 14:47:38 crc kubenswrapper[4902]: I0121 14:47:38.278162 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerStarted","Data":"9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697"} Jan 21 14:47:39 crc kubenswrapper[4902]: I0121 14:47:39.288427 4902 generic.go:334] "Generic (PLEG): container finished" podID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerID="d60a0eed6453f9d43c23a2f26215345a7e683257d0707aefa6f32dff6ebd53be" exitCode=0 Jan 21 14:47:39 crc kubenswrapper[4902]: I0121 14:47:39.288513 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerDied","Data":"d60a0eed6453f9d43c23a2f26215345a7e683257d0707aefa6f32dff6ebd53be"} Jan 21 14:47:39 crc kubenswrapper[4902]: I0121 14:47:39.562521 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-72rgj" Jan 21 14:47:39 crc kubenswrapper[4902]: I0121 14:47:39.672933 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-h2pgt" Jan 21 14:47:40 crc kubenswrapper[4902]: I0121 14:47:40.300931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerStarted","Data":"fd0d369540fc1d16ed445672cd614b883042045aff9a1666bb0a4653dbd19f05"} Jan 21 14:47:41 crc kubenswrapper[4902]: I0121 14:47:41.307078 4902 generic.go:334] "Generic (PLEG): container finished" 
podID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerID="fd0d369540fc1d16ed445672cd614b883042045aff9a1666bb0a4653dbd19f05" exitCode=0 Jan 21 14:47:41 crc kubenswrapper[4902]: I0121 14:47:41.307368 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerDied","Data":"fd0d369540fc1d16ed445672cd614b883042045aff9a1666bb0a4653dbd19f05"} Jan 21 14:47:41 crc kubenswrapper[4902]: I0121 14:47:41.324442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerStarted","Data":"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8"} Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.334121 4902 generic.go:334] "Generic (PLEG): container finished" podID="89a05951-5e50-40de-92cd-2e064b9251f6" containerID="989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8" exitCode=0 Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.334269 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerDied","Data":"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8"} Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.566165 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.597377 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") pod \"b4942197-db6e-4bb6-af6d-24694a007a0b\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.597485 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") pod \"b4942197-db6e-4bb6-af6d-24694a007a0b\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.597545 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") pod \"b4942197-db6e-4bb6-af6d-24694a007a0b\" (UID: \"b4942197-db6e-4bb6-af6d-24694a007a0b\") " Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.599289 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle" (OuterVolumeSpecName: "bundle") pod "b4942197-db6e-4bb6-af6d-24694a007a0b" (UID: "b4942197-db6e-4bb6-af6d-24694a007a0b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.603528 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k" (OuterVolumeSpecName: "kube-api-access-fh52k") pod "b4942197-db6e-4bb6-af6d-24694a007a0b" (UID: "b4942197-db6e-4bb6-af6d-24694a007a0b"). InnerVolumeSpecName "kube-api-access-fh52k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.608469 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util" (OuterVolumeSpecName: "util") pod "b4942197-db6e-4bb6-af6d-24694a007a0b" (UID: "b4942197-db6e-4bb6-af6d-24694a007a0b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.698692 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh52k\" (UniqueName: \"kubernetes.io/projected/b4942197-db6e-4bb6-af6d-24694a007a0b-kube-api-access-fh52k\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.698732 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:42 crc kubenswrapper[4902]: I0121 14:47:42.698741 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4942197-db6e-4bb6-af6d-24694a007a0b-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.340323 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.340326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj" event={"ID":"b4942197-db6e-4bb6-af6d-24694a007a0b","Type":"ContainerDied","Data":"6962038ec7ce5218af0f0011e64727b716daf49cb4483a70fdeb952e8753254a"} Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.340368 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6962038ec7ce5218af0f0011e64727b716daf49cb4483a70fdeb952e8753254a" Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.342035 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerStarted","Data":"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2"} Jan 21 14:47:43 crc kubenswrapper[4902]: I0121 14:47:43.369435 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fzz8m" podStartSLOduration=3.231805329 podStartE2EDuration="7.369416292s" podCreationTimestamp="2026-01-21 14:47:36 +0000 UTC" firstStartedPulling="2026-01-21 14:47:38.625730859 +0000 UTC m=+820.702563888" lastFinishedPulling="2026-01-21 14:47:42.763341812 +0000 UTC m=+824.840174851" observedRunningTime="2026-01-21 14:47:43.365001478 +0000 UTC m=+825.441834527" watchObservedRunningTime="2026-01-21 14:47:43.369416292 +0000 UTC m=+825.446249321" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.035317 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb"] Jan 21 14:47:46 crc kubenswrapper[4902]: E0121 14:47:46.036500 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="extract" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.036579 4902 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="extract" Jan 21 14:47:46 crc kubenswrapper[4902]: E0121 14:47:46.036659 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="util" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.036737 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="util" Jan 21 14:47:46 crc kubenswrapper[4902]: E0121 14:47:46.036819 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="pull" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.036890 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="pull" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.037145 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4942197-db6e-4bb6-af6d-24694a007a0b" containerName="extract" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.037597 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.039218 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.039440 4902 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8fbs6" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.039480 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.125396 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb"] Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.141511 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58qph\" (UniqueName: \"kubernetes.io/projected/0d7d00e8-0d08-48df-82d4-1427b10adbf9-kube-api-access-58qph\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.141766 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d7d00e8-0d08-48df-82d4-1427b10adbf9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.242885 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58qph\" (UniqueName: \"kubernetes.io/projected/0d7d00e8-0d08-48df-82d4-1427b10adbf9-kube-api-access-58qph\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.242951 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d7d00e8-0d08-48df-82d4-1427b10adbf9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.243480 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d7d00e8-0d08-48df-82d4-1427b10adbf9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.269268 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58qph\" (UniqueName: \"kubernetes.io/projected/0d7d00e8-0d08-48df-82d4-1427b10adbf9-kube-api-access-58qph\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lppwb\" (UID: \"0d7d00e8-0d08-48df-82d4-1427b10adbf9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.352410 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.615474 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb"] Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.844833 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:46 crc kubenswrapper[4902]: I0121 14:47:46.844873 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:47 crc kubenswrapper[4902]: I0121 14:47:47.365526 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" event={"ID":"0d7d00e8-0d08-48df-82d4-1427b10adbf9","Type":"ContainerStarted","Data":"bc17a62a4053588b66f8088ee78e88091ced66d4df1ed9671a0d220865ec3e2d"} Jan 21 14:47:47 crc kubenswrapper[4902]: I0121 14:47:47.769976 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:47:47 crc kubenswrapper[4902]: I0121 14:47:47.770076 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:47:47 crc kubenswrapper[4902]: I0121 14:47:47.908534 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzz8m" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" probeResult="failure" output=< Jan 21 14:47:47 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 14:47:47 crc 
kubenswrapper[4902]: > Jan 21 14:47:49 crc kubenswrapper[4902]: I0121 14:47:49.549446 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xpzj8" Jan 21 14:47:56 crc kubenswrapper[4902]: I0121 14:47:56.886543 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:56 crc kubenswrapper[4902]: I0121 14:47:56.943783 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.105656 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.450594 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" event={"ID":"0d7d00e8-0d08-48df-82d4-1427b10adbf9","Type":"ContainerStarted","Data":"2a082becd3fb0d88001474c2e54e6d3ff7b369a02a05e06721cfed6d710bdf6e"} Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.450807 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fzz8m" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" containerID="cri-o://9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" gracePeriod=2 Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.480840 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lppwb" podStartSLOduration=1.4153507570000001 podStartE2EDuration="12.480817695s" podCreationTimestamp="2026-01-21 14:47:46 +0000 UTC" firstStartedPulling="2026-01-21 14:47:46.6384348 +0000 UTC m=+828.715267829" lastFinishedPulling="2026-01-21 14:47:57.703901728 +0000 UTC m=+839.780734767" observedRunningTime="2026-01-21 14:47:58.476949857 +0000 UTC m=+840.553782906" watchObservedRunningTime="2026-01-21 14:47:58.480817695 +0000 UTC m=+840.557650724" Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.825371 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.952463 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") pod \"89a05951-5e50-40de-92cd-2e064b9251f6\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.952551 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") pod \"89a05951-5e50-40de-92cd-2e064b9251f6\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.952668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") pod \"89a05951-5e50-40de-92cd-2e064b9251f6\" (UID: \"89a05951-5e50-40de-92cd-2e064b9251f6\") " Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.955806 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities" (OuterVolumeSpecName: "utilities") pod "89a05951-5e50-40de-92cd-2e064b9251f6" (UID: "89a05951-5e50-40de-92cd-2e064b9251f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:47:58 crc kubenswrapper[4902]: I0121 14:47:58.960414 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc" (OuterVolumeSpecName: "kube-api-access-zp7dc") pod "89a05951-5e50-40de-92cd-2e064b9251f6" (UID: "89a05951-5e50-40de-92cd-2e064b9251f6"). InnerVolumeSpecName "kube-api-access-zp7dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.055054 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.055090 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp7dc\" (UniqueName: \"kubernetes.io/projected/89a05951-5e50-40de-92cd-2e064b9251f6-kube-api-access-zp7dc\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.072200 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89a05951-5e50-40de-92cd-2e064b9251f6" (UID: "89a05951-5e50-40de-92cd-2e064b9251f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.156697 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a05951-5e50-40de-92cd-2e064b9251f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457776 4902 generic.go:334] "Generic (PLEG): container finished" podID="89a05951-5e50-40de-92cd-2e064b9251f6" containerID="9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" exitCode=0 Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerDied","Data":"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2"} Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457862 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzz8m" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzz8m" event={"ID":"89a05951-5e50-40de-92cd-2e064b9251f6","Type":"ContainerDied","Data":"9ed3a42d2948444a026315cf05bef1569614af4c0ee778024e46e86063755697"} Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.457926 4902 scope.go:117] "RemoveContainer" containerID="9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.475173 4902 scope.go:117] "RemoveContainer" containerID="989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.492612 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.497679 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fzz8m"] Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.502074 4902 scope.go:117] "RemoveContainer" containerID="3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.519294 4902 scope.go:117] "RemoveContainer" containerID="9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" Jan 21 14:47:59 crc kubenswrapper[4902]: E0121 14:47:59.519719 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2\": container with ID starting with 9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2 not found: ID does not exist" containerID="9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.519768 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2"} err="failed to get container status \"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2\": rpc error: code = NotFound desc = could not find container \"9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2\": container with ID starting with 9cde000cf043ff734560c74e46f41047e9af903b4a28c19697f9f19eaaa25fb2 not found: ID does not exist" Jan 21 14:47:59 crc 
kubenswrapper[4902]: I0121 14:47:59.519807 4902 scope.go:117] "RemoveContainer" containerID="989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8" Jan 21 14:47:59 crc kubenswrapper[4902]: E0121 14:47:59.520459 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8\": container with ID starting with 989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8 not found: ID does not exist" containerID="989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.520506 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8"} err="failed to get container status \"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8\": rpc error: code = NotFound desc = could not find container \"989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8\": container with ID starting with 989671a55d0c7614bf83ff9827e3fbd009059194c2b1426b5974e1f4db0fb8b8 not found: ID does not exist" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.520538 4902 scope.go:117] "RemoveContainer" containerID="3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9" Jan 21 14:47:59 crc kubenswrapper[4902]: E0121 14:47:59.520843 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9\": container with ID starting with 3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9 not found: ID does not exist" containerID="3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9" Jan 21 14:47:59 crc kubenswrapper[4902]: I0121 14:47:59.520866 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9"} err="failed to get container status \"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9\": rpc error: code = NotFound desc = could not find container \"3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9\": container with ID starting with 3d196365e1bdeb6ab716f591fdd270c7c2eac6e0fbbe252b78e6f91b6f50d1b9 not found: ID does not exist" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.302279 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" path="/var/lib/kubelet/pods/89a05951-5e50-40de-92cd-2e064b9251f6/volumes" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.933793 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-p2522"] Jan 21 14:48:00 crc kubenswrapper[4902]: E0121 14:48:00.934086 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="extract-content" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.934101 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="extract-content" Jan 21 14:48:00 crc kubenswrapper[4902]: E0121 14:48:00.934114 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.934122 4902 
state_mem.go:107] "Deleted CPUSet assignment" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" Jan 21 14:48:00 crc kubenswrapper[4902]: E0121 14:48:00.934149 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="extract-utilities" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.934159 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="extract-utilities" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.934285 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a05951-5e50-40de-92cd-2e064b9251f6" containerName="registry-server" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.935397 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.938324 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.938849 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.944990 4902 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9np5k" Jan 21 14:48:00 crc kubenswrapper[4902]: I0121 14:48:00.945146 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-p2522"] Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.086108 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.086180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgr6d\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-kube-api-access-dgr6d\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.187256 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.187317 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgr6d\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-kube-api-access-dgr6d\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.207839 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgr6d\" (UniqueName: 
\"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-kube-api-access-dgr6d\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.211412 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9093daac-4fd2-4075-8e73-d358cd885c3c-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-p2522\" (UID: \"9093daac-4fd2-4075-8e73-d358cd885c3c\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.303781 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:01 crc kubenswrapper[4902]: I0121 14:48:01.493737 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-p2522"] Jan 21 14:48:01 crc kubenswrapper[4902]: W0121 14:48:01.505526 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9093daac_4fd2_4075_8e73_d358cd885c3c.slice/crio-38ad53a80d5cca05802d38a7a845a8fd4913860bd8d324fd5d8e7167b91f8813 WatchSource:0}: Error finding container 38ad53a80d5cca05802d38a7a845a8fd4913860bd8d324fd5d8e7167b91f8813: Status 404 returned error can't find the container with id 38ad53a80d5cca05802d38a7a845a8fd4913860bd8d324fd5d8e7167b91f8813 Jan 21 14:48:02 crc kubenswrapper[4902]: I0121 14:48:02.478101 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" event={"ID":"9093daac-4fd2-4075-8e73-d358cd885c3c","Type":"ContainerStarted","Data":"38ad53a80d5cca05802d38a7a845a8fd4913860bd8d324fd5d8e7167b91f8813"} Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.485815 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-llf68"] Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.487088 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.490445 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-llf68"] Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.493382 4902 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6m2lz" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.643469 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h42m\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-kube-api-access-7h42m\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.643518 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.745252 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h42m\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-kube-api-access-7h42m\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.745651 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.777239 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h42m\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-kube-api-access-7h42m\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.778506 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21799993-1de7-4aef-9cfa-c132249ecf74-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-llf68\" (UID: \"21799993-1de7-4aef-9cfa-c132249ecf74\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:04 crc kubenswrapper[4902]: I0121 14:48:04.826826 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" Jan 21 14:48:05 crc kubenswrapper[4902]: I0121 14:48:05.212717 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-llf68"] Jan 21 14:48:05 crc kubenswrapper[4902]: W0121 14:48:05.219617 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21799993_1de7_4aef_9cfa_c132249ecf74.slice/crio-b6821c4081e9b32522ffad5787b7c27e5741889901baf84d22740f4f20de8910 WatchSource:0}: Error finding container b6821c4081e9b32522ffad5787b7c27e5741889901baf84d22740f4f20de8910: Status 404 returned error can't find the container with id b6821c4081e9b32522ffad5787b7c27e5741889901baf84d22740f4f20de8910 Jan 21 14:48:05 crc kubenswrapper[4902]: I0121 14:48:05.510851 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" event={"ID":"21799993-1de7-4aef-9cfa-c132249ecf74","Type":"ContainerStarted","Data":"b6821c4081e9b32522ffad5787b7c27e5741889901baf84d22740f4f20de8910"} Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.535871 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" event={"ID":"21799993-1de7-4aef-9cfa-c132249ecf74","Type":"ContainerStarted","Data":"475de260633d7bc549a9d606755d85a160d93bf66693fbc3c171cfdc76134fa5"} Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.541313 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" event={"ID":"9093daac-4fd2-4075-8e73-d358cd885c3c","Type":"ContainerStarted","Data":"d67cf11162782bdcacfb014ce4cc0261c7e396c13e5ad0547d2ce7520c149dca"} Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.541488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.553384 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-llf68" podStartSLOduration=1.460956391 podStartE2EDuration="5.553367559s" podCreationTimestamp="2026-01-21 14:48:04 +0000 UTC" firstStartedPulling="2026-01-21 14:48:05.221832294 +0000 UTC m=+847.298665323" lastFinishedPulling="2026-01-21 14:48:09.314243462 +0000 UTC m=+851.391076491" observedRunningTime="2026-01-21 14:48:09.55046345 +0000 UTC m=+851.627296489" watchObservedRunningTime="2026-01-21 14:48:09.553367559 +0000 UTC m=+851.630200598" Jan 21 14:48:09 crc kubenswrapper[4902]: I0121 14:48:09.578611 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" podStartSLOduration=1.749928361 podStartE2EDuration="9.578588783s" podCreationTimestamp="2026-01-21 14:48:00 +0000 UTC" firstStartedPulling="2026-01-21 14:48:01.507635037 +0000 UTC m=+843.584468066" lastFinishedPulling="2026-01-21 14:48:09.336295459 +0000 UTC m=+851.413128488" observedRunningTime="2026-01-21 14:48:09.574366447 +0000 UTC m=+851.651199496" watchObservedRunningTime="2026-01-21 14:48:09.578588783 +0000 UTC m=+851.655421822" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.342603 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4cd6m"] Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.343441 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.346543 4902 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lczzx" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.359390 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4cd6m"] Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.490677 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-bound-sa-token\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.491290 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpgl\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-kube-api-access-djpgl\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.592973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-bound-sa-token\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.593010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djpgl\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-kube-api-access-djpgl\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.617133 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djpgl\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-kube-api-access-djpgl\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.623809 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/12dae6d4-a2b1-4ef8-ae74-369697c9172b-bound-sa-token\") pod \"cert-manager-86cb77c54b-4cd6m\" (UID: \"12dae6d4-a2b1-4ef8-ae74-369697c9172b\") " pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:14 crc kubenswrapper[4902]: I0121 14:48:14.720602 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" Jan 21 14:48:15 crc kubenswrapper[4902]: I0121 14:48:15.120777 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4cd6m"] Jan 21 14:48:15 crc kubenswrapper[4902]: I0121 14:48:15.597915 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" event={"ID":"12dae6d4-a2b1-4ef8-ae74-369697c9172b","Type":"ContainerStarted","Data":"14a05c26a1f2ba10b45056581c18b90ba5160d665c6fc1cce2ec1f89dc7f4fe2"} Jan 21 14:48:15 crc kubenswrapper[4902]: I0121 14:48:15.598332 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" event={"ID":"12dae6d4-a2b1-4ef8-ae74-369697c9172b","Type":"ContainerStarted","Data":"8ce00bffc240c82186d2c71f5b5500b76190af8418b7e930f030f584e9c68ae9"} Jan 21 14:48:15 crc kubenswrapper[4902]: I0121 14:48:15.616667 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-4cd6m" podStartSLOduration=1.6166471919999998 podStartE2EDuration="1.616647192s" podCreationTimestamp="2026-01-21 14:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:48:15.614104932 +0000 UTC m=+857.690937981" watchObservedRunningTime="2026-01-21 14:48:15.616647192 +0000 UTC m=+857.693480221" Jan 21 14:48:16 crc kubenswrapper[4902]: I0121 14:48:16.307453 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-p2522" Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.769298 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.769609 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.769658 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.770149 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:48:17 crc kubenswrapper[4902]: I0121 14:48:17.770206 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca" gracePeriod=600 Jan 21 14:48:18 crc kubenswrapper[4902]: I0121 14:48:18.622278 4902 generic.go:334] "Generic (PLEG): container finished" 
podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca" exitCode=0 Jan 21 14:48:18 crc kubenswrapper[4902]: I0121 14:48:18.622361 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca"} Jan 21 14:48:18 crc kubenswrapper[4902]: I0121 14:48:18.623141 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1"} Jan 21 14:48:18 crc kubenswrapper[4902]: I0121 14:48:18.623184 4902 scope.go:117] "RemoveContainer" containerID="1d2da13e9ad46e483ffa722259ad0a8b94b5c2e16fcacdd89045b1d8ac2afd0e" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.844705 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.846196 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.849237 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.849951 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.850277 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-w7v62" Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.870738 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:19 crc kubenswrapper[4902]: I0121 14:48:19.961184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") pod \"openstack-operator-index-9nh8f\" (UID: \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\") " pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:20 crc kubenswrapper[4902]: I0121 14:48:20.062370 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") pod \"openstack-operator-index-9nh8f\" (UID: \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\") " pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:20 crc kubenswrapper[4902]: I0121 14:48:20.086483 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") pod \"openstack-operator-index-9nh8f\" (UID: \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\") " pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:20 crc kubenswrapper[4902]: I0121 14:48:20.172733 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:20 crc kubenswrapper[4902]: I0121 14:48:20.658853 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:21 crc kubenswrapper[4902]: I0121 14:48:21.650364 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9nh8f" event={"ID":"bac350c4-c5e3-4fa1-8a4d-88ba0729a776","Type":"ContainerStarted","Data":"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d"} Jan 21 14:48:21 crc kubenswrapper[4902]: I0121 14:48:21.650809 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9nh8f" event={"ID":"bac350c4-c5e3-4fa1-8a4d-88ba0729a776","Type":"ContainerStarted","Data":"8701aa92f151895298f0a19d3131c4e6a3110a63aaae0aaedd69e0fff8da5ce8"} Jan 21 14:48:21 crc kubenswrapper[4902]: I0121 14:48:21.663753 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9nh8f" podStartSLOduration=1.836481289 podStartE2EDuration="2.663731062s" podCreationTimestamp="2026-01-21 14:48:19 +0000 UTC" firstStartedPulling="2026-01-21 14:48:20.677562948 +0000 UTC m=+862.754395977" lastFinishedPulling="2026-01-21 14:48:21.504812721 +0000 UTC m=+863.581645750" observedRunningTime="2026-01-21 14:48:21.662107558 +0000 UTC m=+863.738940587" watchObservedRunningTime="2026-01-21 14:48:21.663731062 +0000 UTC m=+863.740564101" Jan 21 14:48:23 crc kubenswrapper[4902]: I0121 14:48:23.621417 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:23 crc kubenswrapper[4902]: I0121 14:48:23.661236 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9nh8f" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerName="registry-server" containerID="cri-o://96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" gracePeriod=2 Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.026935 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.125226 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") pod \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\" (UID: \"bac350c4-c5e3-4fa1-8a4d-88ba0729a776\") " Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.131371 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z" (OuterVolumeSpecName: "kube-api-access-q9v5z") pod "bac350c4-c5e3-4fa1-8a4d-88ba0729a776" (UID: "bac350c4-c5e3-4fa1-8a4d-88ba0729a776"). InnerVolumeSpecName "kube-api-access-q9v5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.227062 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9v5z\" (UniqueName: \"kubernetes.io/projected/bac350c4-c5e3-4fa1-8a4d-88ba0729a776-kube-api-access-q9v5z\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.435524 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dp8mf"] Jan 21 14:48:24 crc kubenswrapper[4902]: E0121 14:48:24.436494 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerName="registry-server" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.436702 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerName="registry-server" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.437183 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerName="registry-server" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.438301 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.455697 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dp8mf"] Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.532538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gltv2\" (UniqueName: \"kubernetes.io/projected/2d05d6f5-a861-4117-b4a0-00e98da2fe57-kube-api-access-gltv2\") pod \"openstack-operator-index-dp8mf\" (UID: \"2d05d6f5-a861-4117-b4a0-00e98da2fe57\") " pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.634535 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gltv2\" (UniqueName: \"kubernetes.io/projected/2d05d6f5-a861-4117-b4a0-00e98da2fe57-kube-api-access-gltv2\") pod \"openstack-operator-index-dp8mf\" (UID: \"2d05d6f5-a861-4117-b4a0-00e98da2fe57\") " pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.660412 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gltv2\" (UniqueName: \"kubernetes.io/projected/2d05d6f5-a861-4117-b4a0-00e98da2fe57-kube-api-access-gltv2\") pod \"openstack-operator-index-dp8mf\" (UID: \"2d05d6f5-a861-4117-b4a0-00e98da2fe57\") " pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.668831 4902 generic.go:334] "Generic (PLEG): container finished" podID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" containerID="96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" exitCode=0 Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.668877 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9nh8f" event={"ID":"bac350c4-c5e3-4fa1-8a4d-88ba0729a776","Type":"ContainerDied","Data":"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d"} Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.668905 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9nh8f" 
event={"ID":"bac350c4-c5e3-4fa1-8a4d-88ba0729a776","Type":"ContainerDied","Data":"8701aa92f151895298f0a19d3131c4e6a3110a63aaae0aaedd69e0fff8da5ce8"} Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.668923 4902 scope.go:117] "RemoveContainer" containerID="96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.669044 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9nh8f" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.707333 4902 scope.go:117] "RemoveContainer" containerID="96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" Jan 21 14:48:24 crc kubenswrapper[4902]: E0121 14:48:24.708008 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d\": container with ID starting with 96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d not found: ID does not exist" containerID="96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.708062 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d"} err="failed to get container status \"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d\": rpc error: code = NotFound desc = could not find container \"96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d\": container with ID starting with 96c0f2b20cb788250b2c5711131b3478c524aa685f4c8af103ab50bca7649c9d not found: ID does not exist" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.712941 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.720713 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9nh8f"] Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.775259 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:24 crc kubenswrapper[4902]: I0121 14:48:24.957149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dp8mf"] Jan 21 14:48:24 crc kubenswrapper[4902]: W0121 14:48:24.962417 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d05d6f5_a861_4117_b4a0_00e98da2fe57.slice/crio-892b16d9b552981852dbae22ecf0709c1ddbb9dec02bc25f44045253ac7f1624 WatchSource:0}: Error finding container 892b16d9b552981852dbae22ecf0709c1ddbb9dec02bc25f44045253ac7f1624: Status 404 returned error can't find the container with id 892b16d9b552981852dbae22ecf0709c1ddbb9dec02bc25f44045253ac7f1624 Jan 21 14:48:25 crc kubenswrapper[4902]: I0121 14:48:25.676631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dp8mf" event={"ID":"2d05d6f5-a861-4117-b4a0-00e98da2fe57","Type":"ContainerStarted","Data":"bfdb1bdf9425993b52dca8ccf0e0a670eeaa8e6ee6394f9cfc2852d0f2226b0c"} Jan 21 14:48:25 crc kubenswrapper[4902]: I0121 14:48:25.676929 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dp8mf" event={"ID":"2d05d6f5-a861-4117-b4a0-00e98da2fe57","Type":"ContainerStarted","Data":"892b16d9b552981852dbae22ecf0709c1ddbb9dec02bc25f44045253ac7f1624"} Jan 21 14:48:25 crc kubenswrapper[4902]: I0121 14:48:25.688914 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dp8mf" podStartSLOduration=1.244428336 podStartE2EDuration="1.688897371s" podCreationTimestamp="2026-01-21 14:48:24 +0000 UTC" firstStartedPulling="2026-01-21 14:48:24.965531705 +0000 UTC m=+867.042364734" lastFinishedPulling="2026-01-21 14:48:25.41000072 +0000 UTC m=+867.486833769" observedRunningTime="2026-01-21 14:48:25.688260653 +0000 UTC m=+867.765093682" watchObservedRunningTime="2026-01-21 14:48:25.688897371 +0000 UTC m=+867.765730400" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.231623 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"] Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.233017 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.249906 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"] Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.340397 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac350c4-c5e3-4fa1-8a4d-88ba0729a776" path="/var/lib/kubelet/pods/bac350c4-c5e3-4fa1-8a4d-88ba0729a776/volumes" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.358878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.358992 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.359077 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.460672 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.461010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.461074 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.461220 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.461543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.484701 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") pod \"certified-operators-7lxbr\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.548033 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:26 crc kubenswrapper[4902]: I0121 14:48:26.828755 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"] Jan 21 14:48:27 crc kubenswrapper[4902]: I0121 14:48:27.690505 4902 generic.go:334] "Generic (PLEG): container finished" podID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerID="e6887d84ac19036b8225ed94be4c49f586fb8bc64a8e5b4a5855f54696ee47ce" exitCode=0 Jan 21 14:48:27 crc kubenswrapper[4902]: I0121 14:48:27.690620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerDied","Data":"e6887d84ac19036b8225ed94be4c49f586fb8bc64a8e5b4a5855f54696ee47ce"} Jan 21 14:48:27 crc kubenswrapper[4902]: I0121 14:48:27.691312 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerStarted","Data":"64d3c73fd6fc6e7756ced5576e58d05676abc9945d3ce06fd36084d244f71002"} Jan 21 14:48:28 crc kubenswrapper[4902]: I0121 14:48:28.704850 4902 generic.go:334] "Generic (PLEG): container finished" podID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerID="bfa8270d976a4fc7059e932914709e76561f9cb9d254e260afd041df52378c93" exitCode=0 Jan 21 14:48:28 crc kubenswrapper[4902]: I0121 14:48:28.704936 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerDied","Data":"bfa8270d976a4fc7059e932914709e76561f9cb9d254e260afd041df52378c93"} Jan 21 14:48:29 crc kubenswrapper[4902]: I0121 14:48:29.713640 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerStarted","Data":"f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2"} Jan 21 14:48:34 crc kubenswrapper[4902]: I0121 14:48:34.775941 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:34 crc kubenswrapper[4902]: I0121 14:48:34.776527 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:34 crc kubenswrapper[4902]: I0121 14:48:34.801868 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:34 crc kubenswrapper[4902]: I0121 14:48:34.815990 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7lxbr" 
podStartSLOduration=7.429198801 podStartE2EDuration="8.815957513s" podCreationTimestamp="2026-01-21 14:48:26 +0000 UTC" firstStartedPulling="2026-01-21 14:48:27.69259773 +0000 UTC m=+869.769430759" lastFinishedPulling="2026-01-21 14:48:29.079356422 +0000 UTC m=+871.156189471" observedRunningTime="2026-01-21 14:48:29.736852167 +0000 UTC m=+871.813685186" watchObservedRunningTime="2026-01-21 14:48:34.815957513 +0000 UTC m=+876.892790542" Jan 21 14:48:35 crc kubenswrapper[4902]: I0121 14:48:35.794958 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-dp8mf" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.548463 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.548865 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.591789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.795241 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.866193 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv"] Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.867521 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.871110 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-w9277" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.874513 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv"] Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.994130 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.994184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:36 crc kubenswrapper[4902]: I0121 14:48:36.994210 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: 
\"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.095817 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.095869 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.095895 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.096438 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.096709 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.120878 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.190249 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.424184 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv"] Jan 21 14:48:37 crc kubenswrapper[4902]: I0121 14:48:37.784659 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerStarted","Data":"73c301ca2159c0c33d1998bfdd53bb9012794bc26bd074af6c573b20ff2d743a"} Jan 21 14:48:38 crc kubenswrapper[4902]: I0121 14:48:38.792393 4902 generic.go:334] "Generic (PLEG): container finished" podID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerID="6850fc9e636e2b34d7f029fff57d259c26e305ce55ddc05019a045301860f0eb" exitCode=0 Jan 21 14:48:38 crc kubenswrapper[4902]: I0121 14:48:38.792455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerDied","Data":"6850fc9e636e2b34d7f029fff57d259c26e305ce55ddc05019a045301860f0eb"} Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.029686 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.031471 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.042626 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.133966 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.134144 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.134184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.235263 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.235346 
4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.235395 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.235929 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.236015 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.254926 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") pod \"community-operators-gxj9j\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.352395 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.423848 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"] Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.424071 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7lxbr" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="registry-server" containerID="cri-o://f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2" gracePeriod=2 Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.802998 4902 generic.go:334] "Generic (PLEG): container finished" podID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerID="bcb171e1d455922261e79a352408e9c0c7a85f4fccfdacbb80bea720c98a917f" exitCode=0 Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.803242 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerDied","Data":"bcb171e1d455922261e79a352408e9c0c7a85f4fccfdacbb80bea720c98a917f"} Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.805957 4902 generic.go:334] "Generic (PLEG): container finished" podID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerID="f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2" exitCode=0 Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.805987 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerDied","Data":"f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2"} Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.892963 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.924101 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.986996 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") pod \"0eb25c9d-2c71-4c7c-892b-bce263563735\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.987115 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") pod \"0eb25c9d-2c71-4c7c-892b-bce263563735\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.987181 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") pod \"0eb25c9d-2c71-4c7c-892b-bce263563735\" (UID: \"0eb25c9d-2c71-4c7c-892b-bce263563735\") " Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.988214 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities" (OuterVolumeSpecName: "utilities") pod "0eb25c9d-2c71-4c7c-892b-bce263563735" (UID: "0eb25c9d-2c71-4c7c-892b-bce263563735"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:48:40 crc kubenswrapper[4902]: I0121 14:48:40.994284 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw" (OuterVolumeSpecName: "kube-api-access-22vvw") pod "0eb25c9d-2c71-4c7c-892b-bce263563735" (UID: "0eb25c9d-2c71-4c7c-892b-bce263563735"). InnerVolumeSpecName "kube-api-access-22vvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.039976 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eb25c9d-2c71-4c7c-892b-bce263563735" (UID: "0eb25c9d-2c71-4c7c-892b-bce263563735"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.088671 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22vvw\" (UniqueName: \"kubernetes.io/projected/0eb25c9d-2c71-4c7c-892b-bce263563735-kube-api-access-22vvw\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.088976 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.088988 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eb25c9d-2c71-4c7c-892b-bce263563735-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.816229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lxbr" event={"ID":"0eb25c9d-2c71-4c7c-892b-bce263563735","Type":"ContainerDied","Data":"64d3c73fd6fc6e7756ced5576e58d05676abc9945d3ce06fd36084d244f71002"} Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.816276 4902 scope.go:117] "RemoveContainer" containerID="f493d4635a60733e8d14df746c92f5036e8289697348e22bd097ef7a88cc56c2" Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.816278 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lxbr" Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.818379 4902 generic.go:334] "Generic (PLEG): container finished" podID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerID="58e01dcf152baa0605dc7a6d72585435564e75b4f638f76712ac46553f8eb051" exitCode=0 Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.818455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerDied","Data":"58e01dcf152baa0605dc7a6d72585435564e75b4f638f76712ac46553f8eb051"} Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.818507 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerStarted","Data":"7cfb471925da61d1c73706b42117dc901586ec198cdb9360f6a17d4a36f443f6"} Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.821545 4902 generic.go:334] "Generic (PLEG): container finished" podID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerID="a7fddc682f8d31b3ea32932b92487e1e92d53984d6e4fe5fb67526e9a2a56398" exitCode=0 Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.821577 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerDied","Data":"a7fddc682f8d31b3ea32932b92487e1e92d53984d6e4fe5fb67526e9a2a56398"} Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.844351 4902 scope.go:117] "RemoveContainer" containerID="bfa8270d976a4fc7059e932914709e76561f9cb9d254e260afd041df52378c93" Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.889427 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"] Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.892284 4902 scope.go:117] "RemoveContainer" 
containerID="e6887d84ac19036b8225ed94be4c49f586fb8bc64a8e5b4a5855f54696ee47ce" Jan 21 14:48:41 crc kubenswrapper[4902]: I0121 14:48:41.894767 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lxbr"] Jan 21 14:48:42 crc kubenswrapper[4902]: I0121 14:48:42.305253 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" path="/var/lib/kubelet/pods/0eb25c9d-2c71-4c7c-892b-bce263563735/volumes" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.086292 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.112641 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") pod \"f7119ded-6a7d-468d-acc4-9d1d1045656c\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.112740 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") pod \"f7119ded-6a7d-468d-acc4-9d1d1045656c\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.112828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") pod \"f7119ded-6a7d-468d-acc4-9d1d1045656c\" (UID: \"f7119ded-6a7d-468d-acc4-9d1d1045656c\") " Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.113515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle" (OuterVolumeSpecName: "bundle") pod "f7119ded-6a7d-468d-acc4-9d1d1045656c" (UID: "f7119ded-6a7d-468d-acc4-9d1d1045656c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.117677 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m" (OuterVolumeSpecName: "kube-api-access-hb27m") pod "f7119ded-6a7d-468d-acc4-9d1d1045656c" (UID: "f7119ded-6a7d-468d-acc4-9d1d1045656c"). InnerVolumeSpecName "kube-api-access-hb27m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.134820 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util" (OuterVolumeSpecName: "util") pod "f7119ded-6a7d-468d-acc4-9d1d1045656c" (UID: "f7119ded-6a7d-468d-acc4-9d1d1045656c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.214226 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.214267 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb27m\" (UniqueName: \"kubernetes.io/projected/f7119ded-6a7d-468d-acc4-9d1d1045656c-kube-api-access-hb27m\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.214277 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7119ded-6a7d-468d-acc4-9d1d1045656c-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.839514 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" event={"ID":"f7119ded-6a7d-468d-acc4-9d1d1045656c","Type":"ContainerDied","Data":"73c301ca2159c0c33d1998bfdd53bb9012794bc26bd074af6c573b20ff2d743a"} Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.839571 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73c301ca2159c0c33d1998bfdd53bb9012794bc26bd074af6c573b20ff2d743a" Jan 21 14:48:43 crc kubenswrapper[4902]: I0121 14:48:43.839588 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv" Jan 21 14:48:45 crc kubenswrapper[4902]: I0121 14:48:45.863156 4902 generic.go:334] "Generic (PLEG): container finished" podID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerID="4fdf41c6649aa4c07005037da3fcebbfd51eb386bde51c1b9445b8ed1c7ff15a" exitCode=0 Jan 21 14:48:45 crc kubenswrapper[4902]: I0121 14:48:45.863232 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerDied","Data":"4fdf41c6649aa4c07005037da3fcebbfd51eb386bde51c1b9445b8ed1c7ff15a"} Jan 21 14:48:46 crc kubenswrapper[4902]: I0121 14:48:46.877686 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerStarted","Data":"16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc"} Jan 21 14:48:46 crc kubenswrapper[4902]: I0121 14:48:46.902442 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gxj9j" podStartSLOduration=2.425071646 podStartE2EDuration="6.902425232s" podCreationTimestamp="2026-01-21 14:48:40 +0000 UTC" firstStartedPulling="2026-01-21 14:48:41.821007021 +0000 UTC m=+883.897840090" lastFinishedPulling="2026-01-21 14:48:46.298360617 +0000 UTC m=+888.375193676" observedRunningTime="2026-01-21 14:48:46.897018353 +0000 UTC m=+888.973851382" watchObservedRunningTime="2026-01-21 14:48:46.902425232 +0000 UTC m=+888.979258261" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.867764 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp"] Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.867992 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" 
containerName="pull" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868003 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="pull" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868014 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="extract" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868019 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="extract" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868028 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="extract-utilities" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868035 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="extract-utilities" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868065 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="util" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868071 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="util" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868079 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="registry-server" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868085 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="registry-server" Jan 21 14:48:47 crc kubenswrapper[4902]: E0121 14:48:47.868098 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="extract-content" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868105 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="extract-content" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868202 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb25c9d-2c71-4c7c-892b-bce263563735" containerName="registry-server" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868219 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7119ded-6a7d-468d-acc4-9d1d1045656c" containerName="extract" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.868607 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.870740 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-4mwhd" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.886903 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx6fm\" (UniqueName: \"kubernetes.io/projected/1fbcd3da-0b42-4d83-b774-776f9d1612d5-kube-api-access-wx6fm\") pod \"openstack-operator-controller-init-6d4d7d8545-mvcwp\" (UID: \"1fbcd3da-0b42-4d83-b774-776f9d1612d5\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.900602 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp"] Jan 21 14:48:47 crc kubenswrapper[4902]: I0121 14:48:47.988726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx6fm\" (UniqueName: \"kubernetes.io/projected/1fbcd3da-0b42-4d83-b774-776f9d1612d5-kube-api-access-wx6fm\") pod \"openstack-operator-controller-init-6d4d7d8545-mvcwp\" (UID: \"1fbcd3da-0b42-4d83-b774-776f9d1612d5\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:48 crc kubenswrapper[4902]: I0121 14:48:48.015595 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx6fm\" (UniqueName: \"kubernetes.io/projected/1fbcd3da-0b42-4d83-b774-776f9d1612d5-kube-api-access-wx6fm\") pod \"openstack-operator-controller-init-6d4d7d8545-mvcwp\" (UID: \"1fbcd3da-0b42-4d83-b774-776f9d1612d5\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:48 crc kubenswrapper[4902]: I0121 14:48:48.185766 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:48 crc kubenswrapper[4902]: I0121 14:48:48.441920 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp"] Jan 21 14:48:48 crc kubenswrapper[4902]: W0121 14:48:48.450279 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fbcd3da_0b42_4d83_b774_776f9d1612d5.slice/crio-3d5b56f885ca84af16d276764383b7a1ea6a0dc8dfc38d758399af82d3ff862e WatchSource:0}: Error finding container 3d5b56f885ca84af16d276764383b7a1ea6a0dc8dfc38d758399af82d3ff862e: Status 404 returned error can't find the container with id 3d5b56f885ca84af16d276764383b7a1ea6a0dc8dfc38d758399af82d3ff862e Jan 21 14:48:48 crc kubenswrapper[4902]: I0121 14:48:48.904707 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" event={"ID":"1fbcd3da-0b42-4d83-b774-776f9d1612d5","Type":"ContainerStarted","Data":"3d5b56f885ca84af16d276764383b7a1ea6a0dc8dfc38d758399af82d3ff862e"} Jan 21 14:48:50 crc kubenswrapper[4902]: I0121 14:48:50.352822 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:50 crc kubenswrapper[4902]: I0121 14:48:50.353098 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:50 crc kubenswrapper[4902]: I0121 14:48:50.401651 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:48:52 crc kubenswrapper[4902]: I0121 14:48:52.950199 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" event={"ID":"1fbcd3da-0b42-4d83-b774-776f9d1612d5","Type":"ContainerStarted","Data":"f77e686e94457c430a7256fd0ca7237386d9140cf367a6391af6af32f945073c"} Jan 21 14:48:52 crc kubenswrapper[4902]: I0121 14:48:52.950678 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:52 crc kubenswrapper[4902]: I0121 14:48:52.984015 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" podStartSLOduration=1.714633363 podStartE2EDuration="5.983999249s" podCreationTimestamp="2026-01-21 14:48:47 +0000 UTC" firstStartedPulling="2026-01-21 14:48:48.451610009 +0000 UTC m=+890.528443038" lastFinishedPulling="2026-01-21 14:48:52.720975895 +0000 UTC m=+894.797808924" observedRunningTime="2026-01-21 14:48:52.977213953 +0000 UTC m=+895.054046992" watchObservedRunningTime="2026-01-21 14:48:52.983999249 +0000 UTC m=+895.060832268" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.244286 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.246324 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.265786 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.286742 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.286800 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.286818 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.387757 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.387840 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.387864 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.388564 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.389194 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.415089 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") pod \"redhat-marketplace-95v9q\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:54 crc kubenswrapper[4902]: I0121 14:48:54.605139 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:48:55 crc kubenswrapper[4902]: I0121 14:48:55.055971 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:48:55 crc kubenswrapper[4902]: I0121 14:48:55.969938 4902 generic.go:334] "Generic (PLEG): container finished" podID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerID="509905b535dba8cb83199b97a62088fcef4741892457d8fea33112bfc7c67c0e" exitCode=0 Jan 21 14:48:55 crc kubenswrapper[4902]: I0121 14:48:55.969976 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerDied","Data":"509905b535dba8cb83199b97a62088fcef4741892457d8fea33112bfc7c67c0e"} Jan 21 14:48:55 crc kubenswrapper[4902]: I0121 14:48:55.970000 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerStarted","Data":"4cc0fcd4edb2759e0ad0548d3682b79a67201175a41ff2a22f6b50c3bd0b04f8"} Jan 21 14:48:57 crc kubenswrapper[4902]: I0121 14:48:57.986676 4902 generic.go:334] "Generic (PLEG): container finished" podID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerID="21e045e5022e7a8dfff24d47031d4a24b623242982635f3f26f4723fd15701b3" exitCode=0 Jan 21 14:48:57 crc kubenswrapper[4902]: I0121 14:48:57.986714 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerDied","Data":"21e045e5022e7a8dfff24d47031d4a24b623242982635f3f26f4723fd15701b3"} Jan 21 14:48:58 crc kubenswrapper[4902]: I0121 14:48:58.188698 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-mvcwp" Jan 21 14:48:58 crc kubenswrapper[4902]: I0121 14:48:58.995702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerStarted","Data":"68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be"} Jan 21 14:48:59 crc kubenswrapper[4902]: I0121 14:48:59.025307 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-95v9q" podStartSLOduration=2.650801001 podStartE2EDuration="5.025292419s" podCreationTimestamp="2026-01-21 14:48:54 +0000 UTC" firstStartedPulling="2026-01-21 14:48:55.971317543 +0000 UTC m=+898.048150572" lastFinishedPulling="2026-01-21 14:48:58.345808961 +0000 UTC m=+900.422641990" observedRunningTime="2026-01-21 14:48:59.023715736 +0000 UTC m=+901.100548765" watchObservedRunningTime="2026-01-21 14:48:59.025292419 +0000 UTC m=+901.102125448" Jan 21 14:49:00 crc kubenswrapper[4902]: I0121 14:49:00.431613 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.020535 4902 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.021794 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gxj9j" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="registry-server" containerID="cri-o://16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc" gracePeriod=2 Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.605549 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.605796 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:04 crc kubenswrapper[4902]: I0121 14:49:04.661979 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.056036 4902 generic.go:334] "Generic (PLEG): container finished" podID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerID="16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc" exitCode=0 Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.056089 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerDied","Data":"16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc"} Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.117557 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.502215 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.657823 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") pod \"fabdb2a2-7f8e-40a4-a150-6bb794482383\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.657910 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") pod \"fabdb2a2-7f8e-40a4-a150-6bb794482383\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.657981 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities\") pod \"fabdb2a2-7f8e-40a4-a150-6bb794482383\" (UID: \"fabdb2a2-7f8e-40a4-a150-6bb794482383\") " Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.659005 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities" (OuterVolumeSpecName: "utilities") pod "fabdb2a2-7f8e-40a4-a150-6bb794482383" (UID: "fabdb2a2-7f8e-40a4-a150-6bb794482383"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.664991 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4" (OuterVolumeSpecName: "kube-api-access-rzph4") pod "fabdb2a2-7f8e-40a4-a150-6bb794482383" (UID: "fabdb2a2-7f8e-40a4-a150-6bb794482383"). InnerVolumeSpecName "kube-api-access-rzph4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.704920 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fabdb2a2-7f8e-40a4-a150-6bb794482383" (UID: "fabdb2a2-7f8e-40a4-a150-6bb794482383"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.759806 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzph4\" (UniqueName: \"kubernetes.io/projected/fabdb2a2-7f8e-40a4-a150-6bb794482383-kube-api-access-rzph4\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.759846 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:05 crc kubenswrapper[4902]: I0121 14:49:05.759856 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabdb2a2-7f8e-40a4-a150-6bb794482383-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.066065 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gxj9j" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.066219 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gxj9j" event={"ID":"fabdb2a2-7f8e-40a4-a150-6bb794482383","Type":"ContainerDied","Data":"7cfb471925da61d1c73706b42117dc901586ec198cdb9360f6a17d4a36f443f6"} Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.066877 4902 scope.go:117] "RemoveContainer" containerID="16997efae7f7e817fb4dcb1190338876eb5cd2dcf755844c0ab32fc4fbef23fc" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.091713 4902 scope.go:117] "RemoveContainer" containerID="4fdf41c6649aa4c07005037da3fcebbfd51eb386bde51c1b9445b8ed1c7ff15a" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.105375 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.111352 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gxj9j"] Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.122823 4902 scope.go:117] "RemoveContainer" containerID="58e01dcf152baa0605dc7a6d72585435564e75b4f638f76712ac46553f8eb051" Jan 21 14:49:06 crc kubenswrapper[4902]: I0121 14:49:06.303836 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" path="/var/lib/kubelet/pods/fabdb2a2-7f8e-40a4-a150-6bb794482383/volumes" Jan 21 14:49:08 crc kubenswrapper[4902]: I0121 14:49:08.221286 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:49:08 crc kubenswrapper[4902]: I0121 14:49:08.221503 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-95v9q" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="registry-server" containerID="cri-o://68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be" gracePeriod=2 Jan 21 14:49:09 crc kubenswrapper[4902]: I0121 14:49:09.105619 4902 generic.go:334] "Generic (PLEG): container finished" podID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerID="68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be" exitCode=0 Jan 21 14:49:09 crc kubenswrapper[4902]: I0121 14:49:09.105708 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerDied","Data":"68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be"} Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.226310 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.319713 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") pod \"de16538d-0e36-4daf-8621-a819da9a3cb6\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.320101 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") pod \"de16538d-0e36-4daf-8621-a819da9a3cb6\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.320286 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") pod \"de16538d-0e36-4daf-8621-a819da9a3cb6\" (UID: \"de16538d-0e36-4daf-8621-a819da9a3cb6\") " Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.320596 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities" (OuterVolumeSpecName: "utilities") pod "de16538d-0e36-4daf-8621-a819da9a3cb6" (UID: "de16538d-0e36-4daf-8621-a819da9a3cb6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.321490 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.324891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr" (OuterVolumeSpecName: "kube-api-access-wzmdr") pod "de16538d-0e36-4daf-8621-a819da9a3cb6" (UID: "de16538d-0e36-4daf-8621-a819da9a3cb6"). InnerVolumeSpecName "kube-api-access-wzmdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.345906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de16538d-0e36-4daf-8621-a819da9a3cb6" (UID: "de16538d-0e36-4daf-8621-a819da9a3cb6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.422347 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de16538d-0e36-4daf-8621-a819da9a3cb6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:10 crc kubenswrapper[4902]: I0121 14:49:10.422384 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzmdr\" (UniqueName: \"kubernetes.io/projected/de16538d-0e36-4daf-8621-a819da9a3cb6-kube-api-access-wzmdr\") on node \"crc\" DevicePath \"\"" Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.121330 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-95v9q" event={"ID":"de16538d-0e36-4daf-8621-a819da9a3cb6","Type":"ContainerDied","Data":"4cc0fcd4edb2759e0ad0548d3682b79a67201175a41ff2a22f6b50c3bd0b04f8"} Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.121646 4902 scope.go:117] "RemoveContainer" containerID="68b24c9e897b0a83b5bad964b916be23b70640bd8873d28be222c9b5977c07be" Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.121757 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-95v9q" Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.141825 4902 scope.go:117] "RemoveContainer" containerID="21e045e5022e7a8dfff24d47031d4a24b623242982635f3f26f4723fd15701b3" Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.161272 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.168367 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-95v9q"] Jan 21 14:49:11 crc kubenswrapper[4902]: I0121 14:49:11.180949 4902 scope.go:117] "RemoveContainer" containerID="509905b535dba8cb83199b97a62088fcef4741892457d8fea33112bfc7c67c0e" Jan 21 14:49:12 crc kubenswrapper[4902]: I0121 14:49:12.301289 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" path="/var/lib/kubelet/pods/de16538d-0e36-4daf-8621-a819da9a3cb6/volumes" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.322552 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd"] Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323066 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="extract-utilities" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323083 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="extract-utilities" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323099 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="extract-content" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323107 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="extract-content" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323119 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="extract-content" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323129 4902 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="extract-content" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323138 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="extract-utilities" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323145 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="extract-utilities" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323155 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323162 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.323171 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323178 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323304 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="de16538d-0e36-4daf-8621-a819da9a3cb6" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323317 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fabdb2a2-7f8e-40a4-a150-6bb794482383" containerName="registry-server" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.323823 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.326011 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zxs6r" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.334620 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.335539 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.338635 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-k94tp" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.347014 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.355888 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.374887 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.375560 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gffs4"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.376134 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.376631 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.381231 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-qbtkd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.381462 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2tlm5" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.392652 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.452116 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gffs4"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.459899 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9hl4\" (UniqueName: \"kubernetes.io/projected/3c1e8b4d-a47d-4a6e-be63-bfc41d04d964-kube-api-access-r9hl4\") pod \"glance-operator-controller-manager-c6994669c-gffs4\" (UID: \"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.460006 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjbr4\" (UniqueName: \"kubernetes.io/projected/66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e-kube-api-access-bjbr4\") pod \"barbican-operator-controller-manager-7ddb5c749-j6fwd\" (UID: \"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.460120 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngmh\" (UniqueName: 
\"kubernetes.io/projected/b924ea4f-71c9-4f42-aa0a-a4945ea589e3-kube-api-access-bngmh\") pod \"cinder-operator-controller-manager-9b68f5989-nh8zr\" (UID: \"b924ea4f-71c9-4f42-aa0a-a4945ea589e3\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.478256 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.481535 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.486481 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-wmkds" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.511747 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.518287 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.537506 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jwshd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561297 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwcjk\" (UniqueName: \"kubernetes.io/projected/bc4c2749-7073-4bb8-8c87-736187565b08-kube-api-access-cwcjk\") pod \"designate-operator-controller-manager-9f958b845-sdkxs\" (UID: \"bc4c2749-7073-4bb8-8c87-736187565b08\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tz8\" (UniqueName: \"kubernetes.io/projected/56c38bff-8549-485e-a91f-1d89d801a8ee-kube-api-access-z6tz8\") pod \"heat-operator-controller-manager-594c8c9d5d-lttm9\" (UID: \"56c38bff-8549-485e-a91f-1d89d801a8ee\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561414 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjbr4\" (UniqueName: \"kubernetes.io/projected/66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e-kube-api-access-bjbr4\") pod \"barbican-operator-controller-manager-7ddb5c749-j6fwd\" (UID: \"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561452 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqspm\" (UniqueName: \"kubernetes.io/projected/05001c4b-c8f0-46ea-bf02-d7537d8a373b-kube-api-access-zqspm\") pod \"horizon-operator-controller-manager-77d5c5b54f-nqnfh\" (UID: \"05001c4b-c8f0-46ea-bf02-d7537d8a373b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561496 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bngmh\" (UniqueName: \"kubernetes.io/projected/b924ea4f-71c9-4f42-aa0a-a4945ea589e3-kube-api-access-bngmh\") pod \"cinder-operator-controller-manager-9b68f5989-nh8zr\" (UID: \"b924ea4f-71c9-4f42-aa0a-a4945ea589e3\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.561550 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9hl4\" (UniqueName: \"kubernetes.io/projected/3c1e8b4d-a47d-4a6e-be63-bfc41d04d964-kube-api-access-r9hl4\") pod \"glance-operator-controller-manager-c6994669c-gffs4\" (UID: \"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.564552 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.575193 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.586947 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.587892 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.595422 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bngmh\" (UniqueName: \"kubernetes.io/projected/b924ea4f-71c9-4f42-aa0a-a4945ea589e3-kube-api-access-bngmh\") pod \"cinder-operator-controller-manager-9b68f5989-nh8zr\" (UID: \"b924ea4f-71c9-4f42-aa0a-a4945ea589e3\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.606362 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ltdfb" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.606548 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.617615 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjbr4\" (UniqueName: \"kubernetes.io/projected/66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e-kube-api-access-bjbr4\") pod \"barbican-operator-controller-manager-7ddb5c749-j6fwd\" (UID: \"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.639134 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9hl4\" (UniqueName: \"kubernetes.io/projected/3c1e8b4d-a47d-4a6e-be63-bfc41d04d964-kube-api-access-r9hl4\") pod \"glance-operator-controller-manager-c6994669c-gffs4\" (UID: \"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.652827 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.653380 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.656442 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.656729 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.660197 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qwhpd" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.666087 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqspm\" (UniqueName: \"kubernetes.io/projected/05001c4b-c8f0-46ea-bf02-d7537d8a373b-kube-api-access-zqspm\") pod \"horizon-operator-controller-manager-77d5c5b54f-nqnfh\" (UID: \"05001c4b-c8f0-46ea-bf02-d7537d8a373b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.666425 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwcjk\" (UniqueName: \"kubernetes.io/projected/bc4c2749-7073-4bb8-8c87-736187565b08-kube-api-access-cwcjk\") pod \"designate-operator-controller-manager-9f958b845-sdkxs\" (UID: \"bc4c2749-7073-4bb8-8c87-736187565b08\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.666450 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6tz8\" (UniqueName: \"kubernetes.io/projected/56c38bff-8549-485e-a91f-1d89d801a8ee-kube-api-access-z6tz8\") pod \"heat-operator-controller-manager-594c8c9d5d-lttm9\" (UID: \"56c38bff-8549-485e-a91f-1d89d801a8ee\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.682754 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.686207 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.696197 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.708370 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vlt4w" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.708551 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.709961 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.724199 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.735959 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwcjk\" (UniqueName: \"kubernetes.io/projected/bc4c2749-7073-4bb8-8c87-736187565b08-kube-api-access-cwcjk\") pod \"designate-operator-controller-manager-9f958b845-sdkxs\" (UID: \"bc4c2749-7073-4bb8-8c87-736187565b08\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.737904 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6tz8\" (UniqueName: \"kubernetes.io/projected/56c38bff-8549-485e-a91f-1d89d801a8ee-kube-api-access-z6tz8\") pod \"heat-operator-controller-manager-594c8c9d5d-lttm9\" (UID: \"56c38bff-8549-485e-a91f-1d89d801a8ee\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.738539 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqspm\" (UniqueName: \"kubernetes.io/projected/05001c4b-c8f0-46ea-bf02-d7537d8a373b-kube-api-access-zqspm\") pod \"horizon-operator-controller-manager-77d5c5b54f-nqnfh\" (UID: \"05001c4b-c8f0-46ea-bf02-d7537d8a373b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.755179 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.756068 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.768153 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/f3f5f576-48b8-4175-8d70-d8de7e41a63a-kube-api-access-wqlvh\") pod \"ironic-operator-controller-manager-78757b4889-khcxt\" (UID: \"f3f5f576-48b8-4175-8d70-d8de7e41a63a\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.768210 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v2r9\" (UniqueName: \"kubernetes.io/projected/cea39ffd-421f-4b74-9f26-065f49e00786-kube-api-access-7v2r9\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.768257 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.768556 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tvc2d" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.791348 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.792262 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.796127 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6dxsm" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.817628 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.827818 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.833839 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/f3f5f576-48b8-4175-8d70-d8de7e41a63a-kube-api-access-wqlvh\") pod \"ironic-operator-controller-manager-78757b4889-khcxt\" (UID: \"f3f5f576-48b8-4175-8d70-d8de7e41a63a\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v2r9\" (UniqueName: \"kubernetes.io/projected/cea39ffd-421f-4b74-9f26-065f49e00786-kube-api-access-7v2r9\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869862 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64d49\" (UniqueName: \"kubernetes.io/projected/7d33c2a4-c369-4a5f-9592-289c162f095c-kube-api-access-64d49\") pod \"keystone-operator-controller-manager-767fdc4f47-qwcvn\" (UID: \"7d33c2a4-c369-4a5f-9592-289c162f095c\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869908 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.869934 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkxds\" (UniqueName: \"kubernetes.io/projected/a5d9aa95-7d14-4a6e-af38-dddad85007f4-kube-api-access-fkxds\") pod \"manila-operator-controller-manager-864f6b75bf-x6xrb\" (UID: \"a5d9aa95-7d14-4a6e-af38-dddad85007f4\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.870810 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:17 crc kubenswrapper[4902]: E0121 14:49:17.870862 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:18.37084608 +0000 UTC m=+920.447679109 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.871010 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.871834 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.879586 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.889375 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8f6dv" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.896568 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-nql9r"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.897553 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.908745 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.918354 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tndh9" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.957078 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-nql9r"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.966193 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.966919 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.967690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlvh\" (UniqueName: \"kubernetes.io/projected/f3f5f576-48b8-4175-8d70-d8de7e41a63a-kube-api-access-wqlvh\") pod \"ironic-operator-controller-manager-78757b4889-khcxt\" (UID: \"f3f5f576-48b8-4175-8d70-d8de7e41a63a\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.972907 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.973960 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6dh\" (UniqueName: \"kubernetes.io/projected/0b55bf9c-cc65-446c-849e-035fb1bba4c4-kube-api-access-nf6dh\") pod \"neutron-operator-controller-manager-cb4666565-8vfnj\" (UID: \"0b55bf9c-cc65-446c-849e-035fb1bba4c4\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.974057 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkxds\" (UniqueName: \"kubernetes.io/projected/a5d9aa95-7d14-4a6e-af38-dddad85007f4-kube-api-access-fkxds\") pod \"manila-operator-controller-manager-864f6b75bf-x6xrb\" (UID: \"a5d9aa95-7d14-4a6e-af38-dddad85007f4\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.974168 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64d49\" (UniqueName: \"kubernetes.io/projected/7d33c2a4-c369-4a5f-9592-289c162f095c-kube-api-access-64d49\") pod \"keystone-operator-controller-manager-767fdc4f47-qwcvn\" (UID: \"7d33c2a4-c369-4a5f-9592-289c162f095c\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.974202 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtrq\" (UniqueName: \"kubernetes.io/projected/01091192-af46-486f-8890-787505f3b41c-kube-api-access-fhtrq\") pod \"mariadb-operator-controller-manager-c87fff755-xrlqr\" (UID: \"01091192-af46-486f-8890-787505f3b41c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.976081 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.976846 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.980711 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd"] Jan 21 14:49:17 crc kubenswrapper[4902]: I0121 14:49:17.981656 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.001413 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.023233 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.030924 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.053579 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.054973 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.065029 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.067275 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.074993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtxx\" (UniqueName: \"kubernetes.io/projected/3912b1da-b132-48da-9b67-1f4aeb2203c4-kube-api-access-9wtxx\") pod \"ovn-operator-controller-manager-55db956ddc-lljfd\" (UID: \"3912b1da-b132-48da-9b67-1f4aeb2203c4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.109916 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgj2z\" (UniqueName: \"kubernetes.io/projected/b01862fd-dfad-4a73-ac90-5ef7823c06ea-kube-api-access-tgj2z\") pod \"nova-operator-controller-manager-65849867d6-nql9r\" (UID: \"b01862fd-dfad-4a73-ac90-5ef7823c06ea\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.109998 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.110059 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtrq\" (UniqueName: \"kubernetes.io/projected/01091192-af46-486f-8890-787505f3b41c-kube-api-access-fhtrq\") pod \"mariadb-operator-controller-manager-c87fff755-xrlqr\" (UID: \"01091192-af46-486f-8890-787505f3b41c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.110092 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npq9n\" (UniqueName: \"kubernetes.io/projected/bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90-kube-api-access-npq9n\") pod \"octavia-operator-controller-manager-7fc9b76cf6-c2nb6\" (UID: \"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.110157 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf6dh\" (UniqueName: \"kubernetes.io/projected/0b55bf9c-cc65-446c-849e-035fb1bba4c4-kube-api-access-nf6dh\") pod \"neutron-operator-controller-manager-cb4666565-8vfnj\" (UID: \"0b55bf9c-cc65-446c-849e-035fb1bba4c4\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.110248 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57j4\" (UniqueName: \"kubernetes.io/projected/14dc1630-021a-4b05-8ac4-d99368b51726-kube-api-access-l57j4\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.113944 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.120333 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.175655 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gw9vr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.176273 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xqxwr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.195369 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b7z56" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.196013 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tx94m" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.196352 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7mnnf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.304124 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.306596 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgj2z\" (UniqueName: \"kubernetes.io/projected/b01862fd-dfad-4a73-ac90-5ef7823c06ea-kube-api-access-tgj2z\") pod \"nova-operator-controller-manager-65849867d6-nql9r\" (UID: \"b01862fd-dfad-4a73-ac90-5ef7823c06ea\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.306938 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.306985 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npq9n\" (UniqueName: \"kubernetes.io/projected/bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90-kube-api-access-npq9n\") pod \"octavia-operator-controller-manager-7fc9b76cf6-c2nb6\" (UID: \"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.307114 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l57j4\" (UniqueName: \"kubernetes.io/projected/14dc1630-021a-4b05-8ac4-d99368b51726-kube-api-access-l57j4\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.307175 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkv5c\" (UniqueName: \"kubernetes.io/projected/1e685238-529c-4964-af9d-8abed4dfcfae-kube-api-access-xkv5c\") pod \"swift-operator-controller-manager-85dd56d4cc-wqmq2\" (UID: \"1e685238-529c-4964-af9d-8abed4dfcfae\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.307216 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wtxx\" (UniqueName: \"kubernetes.io/projected/3912b1da-b132-48da-9b67-1f4aeb2203c4-kube-api-access-9wtxx\") pod \"ovn-operator-controller-manager-55db956ddc-lljfd\" (UID: \"3912b1da-b132-48da-9b67-1f4aeb2203c4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.307256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4x6s\" (UniqueName: \"kubernetes.io/projected/c5d64dc8-80f6-4076-9068-11ec25d524b5-kube-api-access-l4x6s\") pod \"placement-operator-controller-manager-686df47fcb-pmvgc\" (UID: \"c5d64dc8-80f6-4076-9068-11ec25d524b5\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.315845 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v2r9\" (UniqueName: \"kubernetes.io/projected/cea39ffd-421f-4b74-9f26-065f49e00786-kube-api-access-7v2r9\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.316282 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.316355 4902 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:18.816337133 +0000 UTC m=+920.893170162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.342081 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtrq\" (UniqueName: \"kubernetes.io/projected/01091192-af46-486f-8890-787505f3b41c-kube-api-access-fhtrq\") pod \"mariadb-operator-controller-manager-c87fff755-xrlqr\" (UID: \"01091192-af46-486f-8890-787505f3b41c\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.343134 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf6dh\" (UniqueName: \"kubernetes.io/projected/0b55bf9c-cc65-446c-849e-035fb1bba4c4-kube-api-access-nf6dh\") pod \"neutron-operator-controller-manager-cb4666565-8vfnj\" (UID: \"0b55bf9c-cc65-446c-849e-035fb1bba4c4\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.343789 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkxds\" (UniqueName: \"kubernetes.io/projected/a5d9aa95-7d14-4a6e-af38-dddad85007f4-kube-api-access-fkxds\") pod \"manila-operator-controller-manager-864f6b75bf-x6xrb\" (UID: \"a5d9aa95-7d14-4a6e-af38-dddad85007f4\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.359528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wtxx\" (UniqueName: \"kubernetes.io/projected/3912b1da-b132-48da-9b67-1f4aeb2203c4-kube-api-access-9wtxx\") pod \"ovn-operator-controller-manager-55db956ddc-lljfd\" (UID: \"3912b1da-b132-48da-9b67-1f4aeb2203c4\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.360133 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npq9n\" (UniqueName: \"kubernetes.io/projected/bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90-kube-api-access-npq9n\") pod \"octavia-operator-controller-manager-7fc9b76cf6-c2nb6\" (UID: \"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.360955 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgj2z\" (UniqueName: \"kubernetes.io/projected/b01862fd-dfad-4a73-ac90-5ef7823c06ea-kube-api-access-tgj2z\") pod \"nova-operator-controller-manager-65849867d6-nql9r\" (UID: \"b01862fd-dfad-4a73-ac90-5ef7823c06ea\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.366203 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64d49\" (UniqueName: \"kubernetes.io/projected/7d33c2a4-c369-4a5f-9592-289c162f095c-kube-api-access-64d49\") pod 
\"keystone-operator-controller-manager-767fdc4f47-qwcvn\" (UID: \"7d33c2a4-c369-4a5f-9592-289c162f095c\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.373155 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l57j4\" (UniqueName: \"kubernetes.io/projected/14dc1630-021a-4b05-8ac4-d99368b51726-kube-api-access-l57j4\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.407107 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.408584 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.408652 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkv5c\" (UniqueName: \"kubernetes.io/projected/1e685238-529c-4964-af9d-8abed4dfcfae-kube-api-access-xkv5c\") pod \"swift-operator-controller-manager-85dd56d4cc-wqmq2\" (UID: \"1e685238-529c-4964-af9d-8abed4dfcfae\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.408682 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4x6s\" (UniqueName: \"kubernetes.io/projected/c5d64dc8-80f6-4076-9068-11ec25d524b5-kube-api-access-l4x6s\") pod \"placement-operator-controller-manager-686df47fcb-pmvgc\" (UID: \"c5d64dc8-80f6-4076-9068-11ec25d524b5\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.408993 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.409020 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf"] Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.409067 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.409125 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:19.409100355 +0000 UTC m=+921.485933384 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.409460 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.409662 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.414537 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.419401 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-4nfmt" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.419509 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lr9lm" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.440483 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.445250 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.448242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkv5c\" (UniqueName: \"kubernetes.io/projected/1e685238-529c-4964-af9d-8abed4dfcfae-kube-api-access-xkv5c\") pod \"swift-operator-controller-manager-85dd56d4cc-wqmq2\" (UID: \"1e685238-529c-4964-af9d-8abed4dfcfae\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.460169 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.460185 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.468139 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4x6s\" (UniqueName: \"kubernetes.io/projected/c5d64dc8-80f6-4076-9068-11ec25d524b5-kube-api-access-l4x6s\") pod \"placement-operator-controller-manager-686df47fcb-pmvgc\" (UID: \"c5d64dc8-80f6-4076-9068-11ec25d524b5\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.486722 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.487882 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.495840 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jnz88" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.504462 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.510138 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrr2\" (UniqueName: \"kubernetes.io/projected/624ad6d5-5647-43c8-8e62-751e4c5989b3-kube-api-access-7rrr2\") pod \"test-operator-controller-manager-7cd8bc9dbb-gn5kf\" (UID: \"624ad6d5-5647-43c8-8e62-751e4c5989b3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.510217 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frm2w\" (UniqueName: \"kubernetes.io/projected/2ad74206-4131-4395-8392-9697c2c164eb-kube-api-access-frm2w\") pod \"telemetry-operator-controller-manager-5f8f495fcf-v7bj9\" (UID: \"2ad74206-4131-4395-8392-9697c2c164eb\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.511512 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.521818 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.533884 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.540961 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.541747 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.548033 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.548344 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.548987 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zbxwz" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.557190 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.586446 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.608144 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.609612 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.610207 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.610902 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrr2\" (UniqueName: \"kubernetes.io/projected/624ad6d5-5647-43c8-8e62-751e4c5989b3-kube-api-access-7rrr2\") pod \"test-operator-controller-manager-7cd8bc9dbb-gn5kf\" (UID: \"624ad6d5-5647-43c8-8e62-751e4c5989b3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.610963 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frm2w\" (UniqueName: \"kubernetes.io/projected/2ad74206-4131-4395-8392-9697c2c164eb-kube-api-access-frm2w\") pod \"telemetry-operator-controller-manager-5f8f495fcf-v7bj9\" (UID: \"2ad74206-4131-4395-8392-9697c2c164eb\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.611005 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrw4x\" (UniqueName: \"kubernetes.io/projected/6783daa1-082d-4ab7-be65-dc2fb211be6c-kube-api-access-hrw4x\") pod \"watcher-operator-controller-manager-64cd966744-s8g8n\" (UID: \"6783daa1-082d-4ab7-be65-dc2fb211be6c\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.624289 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs"] Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.630448 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kqtcx" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.648215 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rrr2\" (UniqueName: \"kubernetes.io/projected/624ad6d5-5647-43c8-8e62-751e4c5989b3-kube-api-access-7rrr2\") pod \"test-operator-controller-manager-7cd8bc9dbb-gn5kf\" (UID: \"624ad6d5-5647-43c8-8e62-751e4c5989b3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.655495 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frm2w\" (UniqueName: \"kubernetes.io/projected/2ad74206-4131-4395-8392-9697c2c164eb-kube-api-access-frm2w\") pod \"telemetry-operator-controller-manager-5f8f495fcf-v7bj9\" (UID: \"2ad74206-4131-4395-8392-9697c2c164eb\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.662065 4902 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.713827 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.713880 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrw4x\" (UniqueName: \"kubernetes.io/projected/6783daa1-082d-4ab7-be65-dc2fb211be6c-kube-api-access-hrw4x\") pod \"watcher-operator-controller-manager-64cd966744-s8g8n\" (UID: \"6783daa1-082d-4ab7-be65-dc2fb211be6c\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.713914 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szjnn\" (UniqueName: \"kubernetes.io/projected/1ffd452b-d331-4c80-a6f6-0b1b21d5fd84-kube-api-access-szjnn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-s7vgs\" (UID: \"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.714030 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.714117 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6p2\" (UniqueName: \"kubernetes.io/projected/77e35131-84f1-4df7-b6de-ceda247df931-kube-api-access-xg6p2\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.726958 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.754945 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrw4x\" (UniqueName: \"kubernetes.io/projected/6783daa1-082d-4ab7-be65-dc2fb211be6c-kube-api-access-hrw4x\") pod \"watcher-operator-controller-manager-64cd966744-s8g8n\" (UID: \"6783daa1-082d-4ab7-be65-dc2fb211be6c\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.777865 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820439 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820480 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6p2\" (UniqueName: \"kubernetes.io/projected/77e35131-84f1-4df7-b6de-ceda247df931-kube-api-access-xg6p2\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.820649 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szjnn\" (UniqueName: \"kubernetes.io/projected/1ffd452b-d331-4c80-a6f6-0b1b21d5fd84-kube-api-access-szjnn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-s7vgs\" (UID: \"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.821590 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.821652 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:19.821635211 +0000 UTC m=+921.898468240 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.821966 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.822006 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:19.321996491 +0000 UTC m=+921.398829520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.822921 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: E0121 14:49:18.822961 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:19.322950357 +0000 UTC m=+921.399783386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.862466 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6p2\" (UniqueName: \"kubernetes.io/projected/77e35131-84f1-4df7-b6de-ceda247df931-kube-api-access-xg6p2\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.867465 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szjnn\" (UniqueName: \"kubernetes.io/projected/1ffd452b-d331-4c80-a6f6-0b1b21d5fd84-kube-api-access-szjnn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-s7vgs\" (UID: \"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.883128 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.954004 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" Jan 21 14:49:18 crc kubenswrapper[4902]: I0121 14:49:18.985837 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.229725 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.236052 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" event={"ID":"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e","Type":"ContainerStarted","Data":"d09f803cdce587e4b8d545fcffcca3989a4dd1ca010981fe40dacf3d29241270"} Jan 21 14:49:19 crc kubenswrapper[4902]: W0121 14:49:19.252219 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c38bff_8549_485e_a91f_1d89d801a8ee.slice/crio-531c8c4a5d7fd5896c1278df57cafe46baf2d8bc7312a9f3635c06ba9045afe4 WatchSource:0}: Error finding container 531c8c4a5d7fd5896c1278df57cafe46baf2d8bc7312a9f3635c06ba9045afe4: Status 404 returned error can't find the container with id 531c8c4a5d7fd5896c1278df57cafe46baf2d8bc7312a9f3635c06ba9045afe4 Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.274517 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gffs4"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.283979 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.335632 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.335753 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.335915 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.335985 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:20.335969128 +0000 UTC m=+922.412802157 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.337644 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.337709 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:20.337695345 +0000 UTC m=+922.414528374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.437816 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.438199 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.438287 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:21.438261681 +0000 UTC m=+923.515094770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.844966 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.845248 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: E0121 14:49:19.845403 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:21.845385769 +0000 UTC m=+923.922218798 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:19 crc kubenswrapper[4902]: W0121 14:49:19.929524 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5d64dc8_80f6_4076_9068_11ec25d524b5.slice/crio-2011aad83d7ab0766aaa35b4b5514bcf9be82e40a137c848d439b1d1fdce6cf7 WatchSource:0}: Error finding container 2011aad83d7ab0766aaa35b4b5514bcf9be82e40a137c848d439b1d1fdce6cf7: Status 404 returned error can't find the container with id 2011aad83d7ab0766aaa35b4b5514bcf9be82e40a137c848d439b1d1fdce6cf7 Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.937380 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc"] Jan 21 14:49:19 crc kubenswrapper[4902]: I0121 14:49:19.964404 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.001616 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc7bedc3_7b23_4f5c_bfbb_7b05694e6b90.slice/crio-72afda5a413e7ac9c92b85720c39a8b517192179bd090d71f85c8dd8a9c63916 WatchSource:0}: Error finding container 72afda5a413e7ac9c92b85720c39a8b517192179bd090d71f85c8dd8a9c63916: Status 404 returned error can't find the container with id 72afda5a413e7ac9c92b85720c39a8b517192179bd090d71f85c8dd8a9c63916 Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.005273 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.010126 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.014646 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.029622 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.182330 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-nql9r"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.188500 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb01862fd_dfad_4a73_ac90_5ef7823c06ea.slice/crio-476005716aeaae86727250c0adb0672168bda9dc84fd95f11ca6401a4fa7302b WatchSource:0}: Error finding container 476005716aeaae86727250c0adb0672168bda9dc84fd95f11ca6401a4fa7302b: Status 404 returned error can't find the container with id 476005716aeaae86727250c0adb0672168bda9dc84fd95f11ca6401a4fa7302b Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.245831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" 
event={"ID":"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964","Type":"ContainerStarted","Data":"0e495f7c375b9e06811a8bf274df4e72da6ce9145a34ea2f11e5fa3872413b62"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.247228 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" event={"ID":"f3f5f576-48b8-4175-8d70-d8de7e41a63a","Type":"ContainerStarted","Data":"0b7aefa08581ab163ca0872316e627b351164c9458c789799eb56a6df1feeb5a"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.248678 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" event={"ID":"c5d64dc8-80f6-4076-9068-11ec25d524b5","Type":"ContainerStarted","Data":"2011aad83d7ab0766aaa35b4b5514bcf9be82e40a137c848d439b1d1fdce6cf7"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.250175 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" event={"ID":"56c38bff-8549-485e-a91f-1d89d801a8ee","Type":"ContainerStarted","Data":"531c8c4a5d7fd5896c1278df57cafe46baf2d8bc7312a9f3635c06ba9045afe4"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.253294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" event={"ID":"b924ea4f-71c9-4f42-aa0a-a4945ea589e3","Type":"ContainerStarted","Data":"a1de2aa57c8376be1b8c65528efc6e3ffa2f9043ebd319277086a0179cc9b46d"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.254781 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" event={"ID":"05001c4b-c8f0-46ea-bf02-d7537d8a373b","Type":"ContainerStarted","Data":"84680913e2093f5caf3b49d2183431d68bda2b464e3351aba53c472b32749618"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.256424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" event={"ID":"bc4c2749-7073-4bb8-8c87-736187565b08","Type":"ContainerStarted","Data":"cd5bda888051a4fc6e23c69c2e0369b531199ebdd8493e6af8d4876a3710a4e5"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.257474 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" event={"ID":"a5d9aa95-7d14-4a6e-af38-dddad85007f4","Type":"ContainerStarted","Data":"efebdb73d651dcecfd8bcf699e5a49ce41002d6c95d657d3c85221347542ff8c"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.259709 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" event={"ID":"b01862fd-dfad-4a73-ac90-5ef7823c06ea","Type":"ContainerStarted","Data":"476005716aeaae86727250c0adb0672168bda9dc84fd95f11ca6401a4fa7302b"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.261129 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" event={"ID":"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90","Type":"ContainerStarted","Data":"72afda5a413e7ac9c92b85720c39a8b517192179bd090d71f85c8dd8a9c63916"} Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.340439 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.350905 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.351008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.351687 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.351748 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:22.351729565 +0000 UTC m=+924.428562594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.352312 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.352352 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:22.352341662 +0000 UTC m=+924.429174691 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.373667 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.391718 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.392226 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6783daa1_082d_4ab7_be65_dc2fb211be6c.slice/crio-a84babe89ef5ca22a0be2933d44543b6956cf0dcdeb121f886eb367a50389822 WatchSource:0}: Error finding container a84babe89ef5ca22a0be2933d44543b6956cf0dcdeb121f886eb367a50389822: Status 404 returned error can't find the container with id a84babe89ef5ca22a0be2933d44543b6956cf0dcdeb121f886eb367a50389822 Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.394992 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e685238_529c_4964_af9d_8abed4dfcfae.slice/crio-5bb277a81a91f081abbf44e4a00f67bbab3d5088712b8fe699084d9d06ebd8d8 WatchSource:0}: Error finding container 5bb277a81a91f081abbf44e4a00f67bbab3d5088712b8fe699084d9d06ebd8d8: Status 404 returned error can't find the container with id 5bb277a81a91f081abbf44e4a00f67bbab3d5088712b8fe699084d9d06ebd8d8 Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.400973 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr"] Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.412304 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.413433 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b55bf9c_cc65_446c_849e_035fb1bba4c4.slice/crio-e10e14d87e0001550e69b4cb3bcf954aab9c87da3a32e92ce3caaf365d0eed6a WatchSource:0}: Error finding container e10e14d87e0001550e69b4cb3bcf954aab9c87da3a32e92ce3caaf365d0eed6a: Status 404 returned error can't find the container with id e10e14d87e0001550e69b4cb3bcf954aab9c87da3a32e92ce3caaf365d0eed6a Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.417637 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.425285 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ad74206_4131_4395_8392_9697c2c164eb.slice/crio-c543ef60c825f8e1d74bca6b41e90de5a83a9c82b3207b3c32594e1772df6f68 WatchSource:0}: Error finding container c543ef60c825f8e1d74bca6b41e90de5a83a9c82b3207b3c32594e1772df6f68: Status 404 returned error can't find the container with id c543ef60c825f8e1d74bca6b41e90de5a83a9c82b3207b3c32594e1772df6f68 Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.426406 4902 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01091192_af46_486f_8890_787505f3b41c.slice/crio-b5b6b6fa7cf8fb796626afdcc27f26238e0e181c53c4f6ff361752d7b8d7148a WatchSource:0}: Error finding container b5b6b6fa7cf8fb796626afdcc27f26238e0e181c53c4f6ff361752d7b8d7148a: Status 404 returned error can't find the container with id b5b6b6fa7cf8fb796626afdcc27f26238e0e181c53c4f6ff361752d7b8d7148a Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.426476 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn"] Jan 21 14:49:20 crc kubenswrapper[4902]: W0121 14:49:20.427410 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod624ad6d5_5647_43c8_8e62_751e4c5989b3.slice/crio-03af43db7907b9f96b39c44e3212f0f413456154db6e1fb754b18514e0179dc9 WatchSource:0}: Error finding container 03af43db7907b9f96b39c44e3212f0f413456154db6e1fb754b18514e0179dc9: Status 404 returned error can't find the container with id 03af43db7907b9f96b39c44e3212f0f413456154db6e1fb754b18514e0179dc9 Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.432824 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs"] Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.435796 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szjnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-s7vgs_openstack-operators(1ffd452b-d331-4c80-a6f6-0b1b21d5fd84): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:49:20 crc 
kubenswrapper[4902]: E0121 14:49:20.436034 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rrr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-gn5kf_openstack-operators(624ad6d5-5647-43c8-8e62-751e4c5989b3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.436029 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frm2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-v7bj9_openstack-operators(2ad74206-4131-4395-8392-9697c2c164eb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.436266 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64d49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-qwcvn_openstack-operators(7d33c2a4-c369-4a5f-9592-289c162f095c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.437538 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.437568 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.437583 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:49:20 crc kubenswrapper[4902]: E0121 14:49:20.437586 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:49:20 crc kubenswrapper[4902]: I0121 14:49:20.439873 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf"] Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.286805 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" event={"ID":"7d33c2a4-c369-4a5f-9592-289c162f095c","Type":"ContainerStarted","Data":"bc8c2ab20c735a10d99185659594096ce7c73887f21c29118a5143ec7300e0c3"} Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.297479 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.322767 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" 
event={"ID":"3912b1da-b132-48da-9b67-1f4aeb2203c4","Type":"ContainerStarted","Data":"ec059c628d28ef3c5fab56f7434e6532dcf70f387cfb37b54135ca6090315cc2"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.325564 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" event={"ID":"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84","Type":"ContainerStarted","Data":"8cce3972789784b00226c241d9c6385d6b6cffa30263d10db1f45ebc58d8cca6"} Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.327618 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.330772 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" event={"ID":"2ad74206-4131-4395-8392-9697c2c164eb","Type":"ContainerStarted","Data":"c543ef60c825f8e1d74bca6b41e90de5a83a9c82b3207b3c32594e1772df6f68"} Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.332679 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.338133 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" event={"ID":"1e685238-529c-4964-af9d-8abed4dfcfae","Type":"ContainerStarted","Data":"5bb277a81a91f081abbf44e4a00f67bbab3d5088712b8fe699084d9d06ebd8d8"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.381882 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" event={"ID":"624ad6d5-5647-43c8-8e62-751e4c5989b3","Type":"ContainerStarted","Data":"03af43db7907b9f96b39c44e3212f0f413456154db6e1fb754b18514e0179dc9"} Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.385015 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.385892 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" event={"ID":"6783daa1-082d-4ab7-be65-dc2fb211be6c","Type":"ContainerStarted","Data":"a84babe89ef5ca22a0be2933d44543b6956cf0dcdeb121f886eb367a50389822"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.395075 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" 
event={"ID":"0b55bf9c-cc65-446c-849e-035fb1bba4c4","Type":"ContainerStarted","Data":"e10e14d87e0001550e69b4cb3bcf954aab9c87da3a32e92ce3caaf365d0eed6a"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.401158 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" event={"ID":"01091192-af46-486f-8890-787505f3b41c","Type":"ContainerStarted","Data":"b5b6b6fa7cf8fb796626afdcc27f26238e0e181c53c4f6ff361752d7b8d7148a"} Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.487877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.488127 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.488238 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:25.488185552 +0000 UTC m=+927.565018581 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:21 crc kubenswrapper[4902]: I0121 14:49:21.933452 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.933669 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:21 crc kubenswrapper[4902]: E0121 14:49:21.933732 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:25.933715846 +0000 UTC m=+928.010548875 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: I0121 14:49:22.355354 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:22 crc kubenswrapper[4902]: I0121 14:49:22.355613 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.355626 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.355766 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:26.355752774 +0000 UTC m=+928.432585803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.355698 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.355891 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:26.355858757 +0000 UTC m=+928.432691866 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.439388 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.439642 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.439673 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:49:22 crc kubenswrapper[4902]: E0121 14:49:22.439701 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:49:25 crc kubenswrapper[4902]: I0121 14:49:25.554271 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:25 crc kubenswrapper[4902]: E0121 14:49:25.554431 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:25 crc kubenswrapper[4902]: E0121 14:49:25.554658 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:33.554642165 +0000 UTC m=+935.631475194 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:25 crc kubenswrapper[4902]: I0121 14:49:25.959963 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:25 crc kubenswrapper[4902]: E0121 14:49:25.960184 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:25 crc kubenswrapper[4902]: E0121 14:49:25.960287 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:33.960265561 +0000 UTC m=+936.037098610 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:26 crc kubenswrapper[4902]: I0121 14:49:26.367005 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:26 crc kubenswrapper[4902]: I0121 14:49:26.367462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:26 crc kubenswrapper[4902]: E0121 14:49:26.367635 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:26 crc kubenswrapper[4902]: E0121 14:49:26.367696 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:34.367676937 +0000 UTC m=+936.444509966 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:26 crc kubenswrapper[4902]: E0121 14:49:26.369586 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:26 crc kubenswrapper[4902]: E0121 14:49:26.369631 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:34.3696174 +0000 UTC m=+936.446450439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:33 crc kubenswrapper[4902]: I0121 14:49:33.633205 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:33 crc kubenswrapper[4902]: E0121 14:49:33.633311 4902 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:33 crc kubenswrapper[4902]: E0121 14:49:33.633833 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert podName:cea39ffd-421f-4b74-9f26-065f49e00786 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:49.633818655 +0000 UTC m=+951.710651684 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert") pod "infra-operator-controller-manager-77c48c7859-46xm9" (UID: "cea39ffd-421f-4b74-9f26-065f49e00786") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: I0121 14:49:34.038659 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.038858 4902 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.038913 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert podName:14dc1630-021a-4b05-8ac4-d99368b51726 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:50.038896046 +0000 UTC m=+952.115729075 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" (UID: "14dc1630-021a-4b05-8ac4-d99368b51726") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: I0121 14:49:34.444381 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:34 crc kubenswrapper[4902]: I0121 14:49:34.444540 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.444843 4902 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.444958 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:50.444937673 +0000 UTC m=+952.521770702 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "metrics-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.445852 4902 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.445898 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs podName:77e35131-84f1-4df7-b6de-ceda247df931 nodeName:}" failed. No retries permitted until 2026-01-21 14:49:50.445889019 +0000 UTC m=+952.522722048 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-hr66g" (UID: "77e35131-84f1-4df7-b6de-ceda247df931") : secret "webhook-server-cert" not found Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.643933 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92" Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.644129 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkv5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-wqmq2_openstack-operators(1e685238-529c-4964-af9d-8abed4dfcfae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:34 crc kubenswrapper[4902]: E0121 14:49:34.645305 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" 
podUID="1e685238-529c-4964-af9d-8abed4dfcfae" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.526401 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.526821 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z6tz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-lttm9_openstack-operators(56c38bff-8549-485e-a91f-1d89d801a8ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.528115 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" podUID="56c38bff-8549-485e-a91f-1d89d801a8ee" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.594915 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" podUID="1e685238-529c-4964-af9d-8abed4dfcfae" Jan 21 14:49:35 crc kubenswrapper[4902]: E0121 14:49:35.595614 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" podUID="56c38bff-8549-485e-a91f-1d89d801a8ee" Jan 21 14:49:36 crc kubenswrapper[4902]: E0121 14:49:36.129576 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 21 14:49:36 crc kubenswrapper[4902]: E0121 14:49:36.129804 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fkxds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-x6xrb_openstack-operators(a5d9aa95-7d14-4a6e-af38-dddad85007f4): ErrImagePull: rpc error: 
code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:36 crc kubenswrapper[4902]: E0121 14:49:36.131723 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" podUID="a5d9aa95-7d14-4a6e-af38-dddad85007f4" Jan 21 14:49:36 crc kubenswrapper[4902]: E0121 14:49:36.602413 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" podUID="a5d9aa95-7d14-4a6e-af38-dddad85007f4" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.442177 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.442467 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wtxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-lljfd_openstack-operators(3912b1da-b132-48da-9b67-1f4aeb2203c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.444153 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" podUID="3912b1da-b132-48da-9b67-1f4aeb2203c4" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.628788 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" podUID="3912b1da-b132-48da-9b67-1f4aeb2203c4" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.812529 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.812751 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hrw4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-s8g8n_openstack-operators(6783daa1-082d-4ab7-be65-dc2fb211be6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:39 crc kubenswrapper[4902]: E0121 14:49:39.813975 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" podUID="6783daa1-082d-4ab7-be65-dc2fb211be6c" Jan 21 14:49:40 crc kubenswrapper[4902]: E0121 14:49:40.633950 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" podUID="6783daa1-082d-4ab7-be65-dc2fb211be6c" Jan 21 14:49:49 crc kubenswrapper[4902]: I0121 14:49:49.633887 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:49 crc kubenswrapper[4902]: E0121 14:49:49.638487 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 21 14:49:49 crc kubenswrapper[4902]: E0121 14:49:49.638695 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rrr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-gn5kf_openstack-operators(624ad6d5-5647-43c8-8e62-751e4c5989b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:49 crc kubenswrapper[4902]: E0121 14:49:49.640085 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:49:49 crc kubenswrapper[4902]: I0121 14:49:49.642089 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cea39ffd-421f-4b74-9f26-065f49e00786-cert\") pod \"infra-operator-controller-manager-77c48c7859-46xm9\" (UID: \"cea39ffd-421f-4b74-9f26-065f49e00786\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:49 crc kubenswrapper[4902]: I0121 14:49:49.775422 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.040650 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.045669 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14dc1630-021a-4b05-8ac4-d99368b51726-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986dhp6x8\" (UID: \"14dc1630-021a-4b05-8ac4-d99368b51726\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.261098 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.447825 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.447901 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.452063 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.456798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/77e35131-84f1-4df7-b6de-ceda247df931-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-hr66g\" (UID: \"77e35131-84f1-4df7-b6de-ceda247df931\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:50 crc kubenswrapper[4902]: I0121 14:49:50.718459 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:49:51 crc kubenswrapper[4902]: E0121 14:49:51.652291 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 14:49:51 crc kubenswrapper[4902]: E0121 14:49:51.652455 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64d49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-qwcvn_openstack-operators(7d33c2a4-c369-4a5f-9592-289c162f095c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:51 crc kubenswrapper[4902]: E0121 14:49:51.653647 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:49:52 crc kubenswrapper[4902]: E0121 14:49:52.276598 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525" Jan 21 14:49:52 crc kubenswrapper[4902]: E0121 14:49:52.276827 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqlvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-78757b4889-khcxt_openstack-operators(f3f5f576-48b8-4175-8d70-d8de7e41a63a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:52 crc kubenswrapper[4902]: E0121 14:49:52.278168 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" podUID="f3f5f576-48b8-4175-8d70-d8de7e41a63a" Jan 21 14:49:52 crc kubenswrapper[4902]: E0121 14:49:52.730649 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525\\\"\"" 
pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" podUID="f3f5f576-48b8-4175-8d70-d8de7e41a63a" Jan 21 14:49:53 crc kubenswrapper[4902]: E0121 14:49:53.355267 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 14:49:53 crc kubenswrapper[4902]: E0121 14:49:53.355486 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwcjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-sdkxs_openstack-operators(bc4c2749-7073-4bb8-8c87-736187565b08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:53 crc kubenswrapper[4902]: E0121 14:49:53.356718 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" podUID="bc4c2749-7073-4bb8-8c87-736187565b08" Jan 21 14:49:53 crc kubenswrapper[4902]: E0121 14:49:53.748581 4902 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" podUID="bc4c2749-7073-4bb8-8c87-736187565b08" Jan 21 14:49:54 crc kubenswrapper[4902]: E0121 14:49:54.673507 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 14:49:54 crc kubenswrapper[4902]: E0121 14:49:54.673786 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frm2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-v7bj9_openstack-operators(2ad74206-4131-4395-8392-9697c2c164eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:54 crc kubenswrapper[4902]: E0121 14:49:54.675097 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:49:57 crc kubenswrapper[4902]: E0121 14:49:57.527997 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 14:49:57 crc kubenswrapper[4902]: E0121 14:49:57.529250 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgj2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-nql9r_openstack-operators(b01862fd-dfad-4a73-ac90-5ef7823c06ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:57 crc kubenswrapper[4902]: E0121 14:49:57.530638 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" podUID="b01862fd-dfad-4a73-ac90-5ef7823c06ea" Jan 21 14:49:57 crc kubenswrapper[4902]: E0121 14:49:57.775736 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" podUID="b01862fd-dfad-4a73-ac90-5ef7823c06ea" Jan 21 14:49:58 crc kubenswrapper[4902]: E0121 14:49:58.240839 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c" Jan 21 14:49:58 crc kubenswrapper[4902]: E0121 14:49:58.241237 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nf6dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-8vfnj_openstack-operators(0b55bf9c-cc65-446c-849e-035fb1bba4c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:58 crc kubenswrapper[4902]: E0121 14:49:58.242666 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" podUID="0b55bf9c-cc65-446c-849e-035fb1bba4c4" Jan 21 14:49:58 crc kubenswrapper[4902]: E0121 14:49:58.785285 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" podUID="0b55bf9c-cc65-446c-849e-035fb1bba4c4" Jan 21 14:50:00 crc kubenswrapper[4902]: E0121 14:50:00.296134 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podUID="624ad6d5-5647-43c8-8e62-751e4c5989b3" Jan 21 14:50:00 crc kubenswrapper[4902]: E0121 14:50:00.674187 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 14:50:00 crc kubenswrapper[4902]: E0121 14:50:00.674370 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szjnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-s7vgs_openstack-operators(1ffd452b-d331-4c80-a6f6-0b1b21d5fd84): ErrImagePull: rpc error: code = Canceled 
desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:50:00 crc kubenswrapper[4902]: E0121 14:50:00.675470 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.462772 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8"] Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.489863 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9"] Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.777378 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g"] Jan 21 14:50:01 crc kubenswrapper[4902]: W0121 14:50:01.779644 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77e35131_84f1_4df7_b6de_ceda247df931.slice/crio-bd481c53ddb914b46e8d8252056b984f59876cf3f8595fc7a9c31d62dd2ae11e WatchSource:0}: Error finding container bd481c53ddb914b46e8d8252056b984f59876cf3f8595fc7a9c31d62dd2ae11e: Status 404 returned error can't find the container with id bd481c53ddb914b46e8d8252056b984f59876cf3f8595fc7a9c31d62dd2ae11e Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.804406 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" event={"ID":"66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e","Type":"ContainerStarted","Data":"147994c9c4d6ffbae7dae34d134e990e3809785d6b92aa20f207ea96ab026ec1"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.805833 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" event={"ID":"77e35131-84f1-4df7-b6de-ceda247df931","Type":"ContainerStarted","Data":"bd481c53ddb914b46e8d8252056b984f59876cf3f8595fc7a9c31d62dd2ae11e"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.807241 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" event={"ID":"3c1e8b4d-a47d-4a6e-be63-bfc41d04d964","Type":"ContainerStarted","Data":"5fc0538aaa09a715393ccb6d668c62fd974446e0a1ae4f3393e081b958848eb8"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.808250 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.809815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" event={"ID":"3912b1da-b132-48da-9b67-1f4aeb2203c4","Type":"ContainerStarted","Data":"b5bea44854eafba048f117851acabbb00cfd0449b127a0acb316bb7c2d3d3b50"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.812321 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.815553 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" event={"ID":"cea39ffd-421f-4b74-9f26-065f49e00786","Type":"ContainerStarted","Data":"9ceb84be5ef5279bdade38a87108464391261dbe3b1954062f0c27d1232bb331"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.822562 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" event={"ID":"a5d9aa95-7d14-4a6e-af38-dddad85007f4","Type":"ContainerStarted","Data":"e5db9fdc618abbcd4a48560055de12494aedd908ef716a271a9ac8deea3b3978"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.826554 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" event={"ID":"bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90","Type":"ContainerStarted","Data":"07c9c538f77e19ab9fab3728e6e8eee968cf664672186bb870a319ad7426ac10"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.834947 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" event={"ID":"b924ea4f-71c9-4f42-aa0a-a4945ea589e3","Type":"ContainerStarted","Data":"ff5fd66c8e144fbf43aa4a446473a2217594924ffba9e1780d0ccf8a3d03b153"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.857319 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" podStartSLOduration=6.731189034 podStartE2EDuration="44.857305327s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.321629873 +0000 UTC m=+921.398462902" lastFinishedPulling="2026-01-21 14:49:57.447746166 +0000 UTC m=+959.524579195" observedRunningTime="2026-01-21 14:50:01.852996719 +0000 UTC m=+963.929829748" watchObservedRunningTime="2026-01-21 14:50:01.857305327 +0000 UTC m=+963.934138356" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.862893 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" event={"ID":"6783daa1-082d-4ab7-be65-dc2fb211be6c","Type":"ContainerStarted","Data":"0afdc19f03cfa72c43b19f1448fa00d0b965d21013c4b314d0e2db064467fe86"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.863199 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.869935 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" event={"ID":"14dc1630-021a-4b05-8ac4-d99368b51726","Type":"ContainerStarted","Data":"f3458e06a9e00c949e22ddd57bdc7a78bd00d6197a6943329b0c946cf6bbecd9"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.926962 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" event={"ID":"01091192-af46-486f-8890-787505f3b41c","Type":"ContainerStarted","Data":"96fabea12803f37ab2fdc0d110168312353a7c4b41d1b31c343457d52625df31"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.927252 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.972350 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" podStartSLOduration=4.683792293 podStartE2EDuration="44.972331501s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.382737618 +0000 UTC m=+922.459570647" lastFinishedPulling="2026-01-21 14:50:00.671276826 +0000 UTC m=+962.748109855" observedRunningTime="2026-01-21 14:50:01.905139963 +0000 UTC m=+963.981972992" watchObservedRunningTime="2026-01-21 14:50:01.972331501 +0000 UTC m=+964.049164530" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.973109 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" event={"ID":"56c38bff-8549-485e-a91f-1d89d801a8ee","Type":"ContainerStarted","Data":"bd91f5934af96035c1e6e6b50659da8a6322137cdc73010f3005a4cc270cf229"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.973836 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.973830 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" podStartSLOduration=3.699938257 podStartE2EDuration="43.973822932s" podCreationTimestamp="2026-01-21 14:49:18 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.397428542 +0000 UTC m=+922.474261571" lastFinishedPulling="2026-01-21 14:50:00.671313217 +0000 UTC m=+962.748146246" observedRunningTime="2026-01-21 14:50:01.969129433 +0000 UTC m=+964.045962462" watchObservedRunningTime="2026-01-21 14:50:01.973822932 +0000 UTC m=+964.050655961" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.981625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" event={"ID":"1e685238-529c-4964-af9d-8abed4dfcfae","Type":"ContainerStarted","Data":"34c313c8ce31ed2925f2d5643ec662a5359fcb236385f602a710ace20a3739ff"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.982405 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.983806 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" event={"ID":"05001c4b-c8f0-46ea-bf02-d7537d8a373b","Type":"ContainerStarted","Data":"288d3b340d3fca254c6e56948869684d71b5385a70e9765fc390cc8727e12f8b"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.984411 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.985391 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" event={"ID":"c5d64dc8-80f6-4076-9068-11ec25d524b5","Type":"ContainerStarted","Data":"3769ea1de172394498b8477afddcb9ca9b1619b07fee03ea91472fadf2b2926d"} Jan 21 14:50:01 crc kubenswrapper[4902]: I0121 14:50:01.985806 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.055764 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" podStartSLOduration=5.580785253 podStartE2EDuration="45.055738295s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.4304698 +0000 UTC m=+922.507302839" lastFinishedPulling="2026-01-21 14:49:59.905422832 +0000 UTC m=+961.982255881" observedRunningTime="2026-01-21 14:50:02.055265002 +0000 UTC m=+964.132098031" watchObservedRunningTime="2026-01-21 14:50:02.055738295 +0000 UTC m=+964.132571324" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.081657 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" podStartSLOduration=5.21599071 podStartE2EDuration="45.081642868s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.01889154 +0000 UTC m=+922.095724569" lastFinishedPulling="2026-01-21 14:49:59.884543688 +0000 UTC m=+961.961376727" observedRunningTime="2026-01-21 14:50:02.078739478 +0000 UTC m=+964.155572507" watchObservedRunningTime="2026-01-21 14:50:02.081642868 +0000 UTC m=+964.158475897" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.130111 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" podStartSLOduration=4.864183424 podStartE2EDuration="45.13009556s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.398629555 +0000 UTC m=+922.475462584" lastFinishedPulling="2026-01-21 14:50:00.664541691 +0000 UTC m=+962.741374720" observedRunningTime="2026-01-21 14:50:02.104599409 +0000 UTC m=+964.181432438" watchObservedRunningTime="2026-01-21 14:50:02.13009556 +0000 UTC m=+964.206928589" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.133731 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" podStartSLOduration=3.71727009 podStartE2EDuration="45.13372037s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.254819856 +0000 UTC m=+921.331652885" lastFinishedPulling="2026-01-21 14:50:00.671270136 +0000 UTC m=+962.748103165" observedRunningTime="2026-01-21 14:50:02.129381081 +0000 UTC m=+964.206214110" watchObservedRunningTime="2026-01-21 14:50:02.13372037 +0000 UTC m=+964.210553399" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.162085 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" podStartSLOduration=5.211417445 podStartE2EDuration="45.16206687s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.933986876 +0000 UTC m=+922.010819905" lastFinishedPulling="2026-01-21 14:49:59.884636311 +0000 UTC m=+961.961469330" observedRunningTime="2026-01-21 14:50:02.155936531 +0000 UTC m=+964.232769560" watchObservedRunningTime="2026-01-21 14:50:02.16206687 +0000 UTC m=+964.238899899" Jan 21 14:50:02 crc kubenswrapper[4902]: E0121 14:50:02.295733 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" 
pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podUID="7d33c2a4-c369-4a5f-9592-289c162f095c" Jan 21 14:50:02 crc kubenswrapper[4902]: I0121 14:50:02.994980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" event={"ID":"77e35131-84f1-4df7-b6de-ceda247df931","Type":"ContainerStarted","Data":"a662be622993c039ca65f54334e428be03d3233f4a97317f521d770c4efa4478"} Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.096336 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" podStartSLOduration=7.054747872 podStartE2EDuration="46.096315774s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.023563725 +0000 UTC m=+921.100396754" lastFinishedPulling="2026-01-21 14:49:58.065131627 +0000 UTC m=+960.141964656" observedRunningTime="2026-01-21 14:50:03.091698667 +0000 UTC m=+965.168531696" watchObservedRunningTime="2026-01-21 14:50:03.096315774 +0000 UTC m=+965.173148803" Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.108341 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" podStartSLOduration=5.405327608 podStartE2EDuration="46.108324175s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.961237645 +0000 UTC m=+922.038070674" lastFinishedPulling="2026-01-21 14:50:00.664234212 +0000 UTC m=+962.741067241" observedRunningTime="2026-01-21 14:50:03.105720613 +0000 UTC m=+965.182553662" watchObservedRunningTime="2026-01-21 14:50:03.108324175 +0000 UTC m=+965.185157204" Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.143409 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" podStartSLOduration=5.60322565 podStartE2EDuration="46.143390939s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:19.3444222 +0000 UTC m=+921.421255229" lastFinishedPulling="2026-01-21 14:49:59.884587479 +0000 UTC m=+961.961420518" observedRunningTime="2026-01-21 14:50:03.142094203 +0000 UTC m=+965.218927242" watchObservedRunningTime="2026-01-21 14:50:03.143390939 +0000 UTC m=+965.220223978" Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.168988 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" podStartSLOduration=8.118365296 podStartE2EDuration="46.168969883s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.015853097 +0000 UTC m=+922.092686126" lastFinishedPulling="2026-01-21 14:49:58.066457674 +0000 UTC m=+960.143290713" observedRunningTime="2026-01-21 14:50:03.164008786 +0000 UTC m=+965.240841815" watchObservedRunningTime="2026-01-21 14:50:03.168969883 +0000 UTC m=+965.245802912" Jan 21 14:50:03 crc kubenswrapper[4902]: I0121 14:50:03.197717 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" podStartSLOduration=45.197702493 podStartE2EDuration="45.197702493s" podCreationTimestamp="2026-01-21 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 14:50:03.196275984 +0000 UTC m=+965.273109013" watchObservedRunningTime="2026-01-21 14:50:03.197702493 +0000 UTC m=+965.274535522" Jan 21 14:50:04 crc kubenswrapper[4902]: I0121 14:50:04.000429 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:50:04 crc kubenswrapper[4902]: I0121 14:50:04.001772 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.684779 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.687254 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-nh8zr" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.688624 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-j6fwd" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.714276 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gffs4" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.840330 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-lttm9" Jan 21 14:50:07 crc kubenswrapper[4902]: I0121 14:50:07.884564 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-nqnfh" Jan 21 14:50:08 crc kubenswrapper[4902]: E0121 14:50:08.383582 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podUID="2ad74206-4131-4395-8392-9697c2c164eb" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.442175 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.446006 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-c2nb6" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.461731 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.487979 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-x6xrb" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.511527 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-lljfd" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.528785 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-xrlqr" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.541063 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-pmvgc" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.598464 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-wqmq2" Jan 21 14:50:08 crc kubenswrapper[4902]: I0121 14:50:08.887719 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-s8g8n" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.032988 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" event={"ID":"f3f5f576-48b8-4175-8d70-d8de7e41a63a","Type":"ContainerStarted","Data":"dcecb835864950ebd8334a279999396d40124aadbabe0012b23a303973500058"} Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.033232 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.034640 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" event={"ID":"14dc1630-021a-4b05-8ac4-d99368b51726","Type":"ContainerStarted","Data":"7771f68efe7eba1dc9be23fbe5f3b261cc56a27bcd0ef2e57933374230eda516"} Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.034782 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.036308 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" event={"ID":"cea39ffd-421f-4b74-9f26-065f49e00786","Type":"ContainerStarted","Data":"6055f7d9b64f6008acc16b0297fd873cf37c0b32a0aa26ccef0a1e3942c877c8"} Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.053374 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" podStartSLOduration=3.684785026 podStartE2EDuration="52.053351338s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.019813626 +0000 UTC m=+922.096646655" lastFinishedPulling="2026-01-21 14:50:08.388379918 +0000 UTC m=+970.465212967" observedRunningTime="2026-01-21 14:50:09.048525435 +0000 UTC m=+971.125358464" watchObservedRunningTime="2026-01-21 14:50:09.053351338 +0000 UTC m=+971.130184377" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.069819 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" podStartSLOduration=45.192816225 podStartE2EDuration="52.06980502s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:50:01.529550613 +0000 UTC m=+963.606383642" lastFinishedPulling="2026-01-21 14:50:08.406539388 +0000 UTC m=+970.483372437" observedRunningTime="2026-01-21 14:50:09.068256138 +0000 UTC m=+971.145089167" watchObservedRunningTime="2026-01-21 14:50:09.06980502 +0000 UTC m=+971.146638049" Jan 21 14:50:09 
crc kubenswrapper[4902]: I0121 14:50:09.086851 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" podStartSLOduration=45.227184441 podStartE2EDuration="52.086830669s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:50:01.527799175 +0000 UTC m=+963.604632204" lastFinishedPulling="2026-01-21 14:50:08.387445403 +0000 UTC m=+970.464278432" observedRunningTime="2026-01-21 14:50:09.083318962 +0000 UTC m=+971.160151991" watchObservedRunningTime="2026-01-21 14:50:09.086830669 +0000 UTC m=+971.163663718" Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.296179 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:50:09 crc kubenswrapper[4902]: I0121 14:50:09.776124 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.042818 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" event={"ID":"0b55bf9c-cc65-446c-849e-035fb1bba4c4","Type":"ContainerStarted","Data":"51b295bbdb999ae1b25dc5563e00d9a2dd300ea7580635b04d9b954d8997d641"} Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.043013 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.045377 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" event={"ID":"bc4c2749-7073-4bb8-8c87-736187565b08","Type":"ContainerStarted","Data":"75576ac01a0b8a2386b243533f80fbeb65d95f9a52786ec24f9f8977324ee7ba"} Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.059515 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" podStartSLOduration=3.726119932 podStartE2EDuration="53.059499671s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.416076595 +0000 UTC m=+922.492909624" lastFinishedPulling="2026-01-21 14:50:09.749456334 +0000 UTC m=+971.826289363" observedRunningTime="2026-01-21 14:50:10.057937069 +0000 UTC m=+972.134770098" watchObservedRunningTime="2026-01-21 14:50:10.059499671 +0000 UTC m=+972.136332700" Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.073189 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" podStartSLOduration=4.181395923 podStartE2EDuration="53.073174267s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.01998623 +0000 UTC m=+922.096819259" lastFinishedPulling="2026-01-21 14:50:08.911764554 +0000 UTC m=+970.988597603" observedRunningTime="2026-01-21 14:50:10.070326729 +0000 UTC m=+972.147159748" watchObservedRunningTime="2026-01-21 14:50:10.073174267 +0000 UTC m=+972.150007296" Jan 21 14:50:10 crc kubenswrapper[4902]: I0121 14:50:10.729346 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-hr66g" Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.087124 4902 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" event={"ID":"b01862fd-dfad-4a73-ac90-5ef7823c06ea","Type":"ContainerStarted","Data":"754aebabeccc0afb5b9bee9cb1062360dbb8fcf1e54ce97e639d072a0a79c540"} Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.088216 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.089258 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" event={"ID":"624ad6d5-5647-43c8-8e62-751e4c5989b3","Type":"ContainerStarted","Data":"d7f04251b633ca79874a460b36003c894296fd04f789c0208ab9106bd530325e"} Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.089571 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.148230 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" podStartSLOduration=3.739297341 podStartE2EDuration="57.148203171s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.190950433 +0000 UTC m=+922.267783462" lastFinishedPulling="2026-01-21 14:50:13.599856263 +0000 UTC m=+975.676689292" observedRunningTime="2026-01-21 14:50:14.122441314 +0000 UTC m=+976.199274353" watchObservedRunningTime="2026-01-21 14:50:14.148203171 +0000 UTC m=+976.225036200" Jan 21 14:50:14 crc kubenswrapper[4902]: I0121 14:50:14.149313 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" podStartSLOduration=3.860634408 podStartE2EDuration="57.149304401s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.43591003 +0000 UTC m=+922.512743049" lastFinishedPulling="2026-01-21 14:50:13.724580013 +0000 UTC m=+975.801413042" observedRunningTime="2026-01-21 14:50:14.146100372 +0000 UTC m=+976.222933411" watchObservedRunningTime="2026-01-21 14:50:14.149304401 +0000 UTC m=+976.226137430" Jan 21 14:50:14 crc kubenswrapper[4902]: E0121 14:50:14.298880 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podUID="1ffd452b-d331-4c80-a6f6-0b1b21d5fd84" Jan 21 14:50:15 crc kubenswrapper[4902]: I0121 14:50:15.099086 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" event={"ID":"7d33c2a4-c369-4a5f-9592-289c162f095c","Type":"ContainerStarted","Data":"0bb6bf1a37d7432f0da1b46e19ba2832e320d64cce0ebe0d38767074f6bb612b"} Jan 21 14:50:15 crc kubenswrapper[4902]: I0121 14:50:15.100235 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:50:15 crc kubenswrapper[4902]: I0121 14:50:15.126791 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" podStartSLOduration=3.635189125 podStartE2EDuration="58.126772951s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.436199798 +0000 UTC m=+922.513032827" lastFinishedPulling="2026-01-21 14:50:14.927783624 +0000 UTC m=+977.004616653" observedRunningTime="2026-01-21 14:50:15.118806929 +0000 UTC m=+977.195639968" watchObservedRunningTime="2026-01-21 14:50:15.126772951 +0000 UTC m=+977.203605980" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.033265 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.036022 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-sdkxs" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.069898 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-khcxt" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.463102 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-nql9r" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.612585 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-8vfnj" Jan 21 14:50:18 crc kubenswrapper[4902]: I0121 14:50:18.729254 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-gn5kf" Jan 21 14:50:19 crc kubenswrapper[4902]: I0121 14:50:19.784325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-46xm9" Jan 21 14:50:20 crc kubenswrapper[4902]: I0121 14:50:20.269993 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986dhp6x8" Jan 21 14:50:25 crc kubenswrapper[4902]: I0121 14:50:25.176409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" event={"ID":"2ad74206-4131-4395-8392-9697c2c164eb","Type":"ContainerStarted","Data":"21f24a1ffaa9b183d180a35af678bf424b03f1f06423693fe4451e2bb4d418f1"} Jan 21 14:50:25 crc kubenswrapper[4902]: I0121 14:50:25.177086 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:50:25 crc kubenswrapper[4902]: I0121 14:50:25.193963 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" podStartSLOduration=4.088624213 podStartE2EDuration="1m8.19394774s" podCreationTimestamp="2026-01-21 14:49:17 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.435928091 +0000 UTC m=+922.512761120" lastFinishedPulling="2026-01-21 14:50:24.541251608 +0000 UTC m=+986.618084647" observedRunningTime="2026-01-21 14:50:25.191694818 +0000 UTC m=+987.268527837" watchObservedRunningTime="2026-01-21 14:50:25.19394774 +0000 UTC m=+987.270780769" Jan 21 14:50:26 crc kubenswrapper[4902]: I0121 14:50:26.186234 4902 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" event={"ID":"1ffd452b-d331-4c80-a6f6-0b1b21d5fd84","Type":"ContainerStarted","Data":"dc60dd8fd66f1b704d4a96c680696c63dec7df7ad5e877df36b052092985150e"} Jan 21 14:50:26 crc kubenswrapper[4902]: I0121 14:50:26.207387 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s7vgs" podStartSLOduration=2.909968971 podStartE2EDuration="1m8.20736287s" podCreationTimestamp="2026-01-21 14:49:18 +0000 UTC" firstStartedPulling="2026-01-21 14:49:20.435694364 +0000 UTC m=+922.512527393" lastFinishedPulling="2026-01-21 14:50:25.733088253 +0000 UTC m=+987.809921292" observedRunningTime="2026-01-21 14:50:26.200360945 +0000 UTC m=+988.277194044" watchObservedRunningTime="2026-01-21 14:50:26.20736287 +0000 UTC m=+988.284195909" Jan 21 14:50:28 crc kubenswrapper[4902]: I0121 14:50:28.665394 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-qwcvn" Jan 21 14:50:38 crc kubenswrapper[4902]: I0121 14:50:38.780634 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-v7bj9" Jan 21 14:50:47 crc kubenswrapper[4902]: I0121 14:50:47.769585 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:50:47 crc kubenswrapper[4902]: I0121 14:50:47.769877 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.842496 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.845982 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.850349 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.851557 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.854337 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.854504 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6mhmv" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.854664 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.855719 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.961549 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.961993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:54 crc kubenswrapper[4902]: I0121 14:50:54.962102 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.063000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.063082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.063150 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.064182 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.064196 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.082823 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") pod \"dnsmasq-dns-5f854695bc-9cn2p\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.165524 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:50:55 crc kubenswrapper[4902]: I0121 14:50:55.641406 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.271659 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.273238 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.282212 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.282283 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.282453 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.285816 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.384903 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.384985 4902 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.385026 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.385944 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.387071 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.420081 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") pod \"dnsmasq-dns-744ffd65bc-kldb7\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.455164 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" event={"ID":"15a1e285-4b20-4390-8e14-d9d2f0101c71","Type":"ContainerStarted","Data":"d9d43db0013e5ca723cebb5651583404b69e5c48b2b76ea7263f612afe6ee4be"} Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.592671 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:50:56 crc kubenswrapper[4902]: I0121 14:50:56.981241 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.003621 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.004721 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.020007 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.059809 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.193508 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.193570 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.193757 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.295790 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.295851 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.295922 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.296797 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.297926 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.320747 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") pod \"dnsmasq-dns-95f5f6995-jnphw\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.347525 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.445697 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.447338 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455352 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455407 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455639 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455723 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455789 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.455983 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.456028 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9m6fj" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.462609 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.481229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerStarted","Data":"5296913110c392e15b54a0f987eb61dded57186e36461bf1b89e97184d22ce54"} Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.601608 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602182 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602238 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602264 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602455 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.602489 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.604306 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.604478 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.604519 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.604564 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.705864 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc 
kubenswrapper[4902]: I0121 14:50:57.705948 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.705984 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706016 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706085 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706103 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706135 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706158 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706174 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706193 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.706745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.707592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.709077 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.709159 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.709474 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.709953 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.720021 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.720954 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.721178 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.725945 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.728941 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.729873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.795598 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:50:57 crc kubenswrapper[4902]: I0121 14:50:57.845889 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:50:57 crc kubenswrapper[4902]: W0121 14:50:57.853527 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod056e5d1c_0c8e_4988_8e3d_bd133023ce30.slice/crio-ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc WatchSource:0}: Error finding container ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc: Status 404 returned error can't find the container with id ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.139200 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.140778 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.143428 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.143597 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.143771 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.143924 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.144126 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.144277 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dc6mx" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.144305 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.149330 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.260660 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314014 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314151 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314236 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314259 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314300 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314322 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314345 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.314375 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.423579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424111 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424450 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424511 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424545 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424570 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424601 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424654 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424679 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424712 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.424842 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.425096 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.426380 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.428758 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.431319 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.433219 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.433430 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.433541 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.433663 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.436466 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.438323 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.438899 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.449947 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.450069 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 
14:50:58.472770 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.473012 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.473683 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.489665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.513520 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" event={"ID":"056e5d1c-0c8e-4988-8e3d-bd133023ce30","Type":"ContainerStarted","Data":"ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc"} Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.515133 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerStarted","Data":"08ee02c4a3aa1bd9f0c6f8daed756e3d6ec0c75c1f2a0da20740a10a51dd17d5"} Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.783263 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dc6mx" Jan 21 14:50:58 crc kubenswrapper[4902]: I0121 14:50:58.792280 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.273362 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:50:59 crc kubenswrapper[4902]: W0121 14:50:59.276802 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7103bd_b24b_4a0c_b68a_17373307f1aa.slice/crio-43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac WatchSource:0}: Error finding container 43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac: Status 404 returned error can't find the container with id 43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.485485 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.502859 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.506607 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.507234 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.512249 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.512354 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.513124 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-z2s6n" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.515684 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.533138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerStarted","Data":"43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac"} Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.667916 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.668187 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.668218 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.671490 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.671561 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.671650 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.671726 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x82cz\" (UniqueName: \"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.672031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774084 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774135 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774168 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774196 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x82cz\" (UniqueName: \"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774251 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774326 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774402 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.774716 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.789242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.789537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.791242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.792446 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.828809 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.829738 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x82cz\" (UniqueName: \"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.832137 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:50:59 crc kubenswrapper[4902]: I0121 14:50:59.835171 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " pod="openstack/openstack-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.132885 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.803205 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.804704 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.811821 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-v26k8" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.812127 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.812516 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.812528 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.816524 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.991109 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.991176 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.991220 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.991250 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.994142 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " 
pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.994222 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.994271 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:00 crc kubenswrapper[4902]: I0121 14:51:00.994335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.082849 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.083894 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.085922 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-f7vd6" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.086188 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.093266 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.093844 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.099882 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100094 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100243 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100358 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100460 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100659 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.100778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.107818 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.111137 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.111468 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.111805 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.111916 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"94031dcf-9569-4cf1-90a9-61c962434ae8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.112875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.144281 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.149422 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202324 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202488 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202542 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.202602 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.214853 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 
14:51:01.303354 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.303412 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.303434 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.303452 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.303471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.304244 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.304855 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.310564 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.315336 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.320966 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") pod \"memcached-0\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 
14:51:01.477749 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 14:51:01 crc kubenswrapper[4902]: I0121 14:51:01.479122 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:02 crc kubenswrapper[4902]: I0121 14:51:02.920133 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:51:02 crc kubenswrapper[4902]: I0121 14:51:02.921207 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:51:02 crc kubenswrapper[4902]: I0121 14:51:02.924543 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h8x8w" Jan 21 14:51:02 crc kubenswrapper[4902]: I0121 14:51:02.934406 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:51:03 crc kubenswrapper[4902]: I0121 14:51:03.096036 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") pod \"kube-state-metrics-0\" (UID: \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\") " pod="openstack/kube-state-metrics-0" Jan 21 14:51:03 crc kubenswrapper[4902]: I0121 14:51:03.200379 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") pod \"kube-state-metrics-0\" (UID: \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\") " pod="openstack/kube-state-metrics-0" Jan 21 14:51:03 crc kubenswrapper[4902]: I0121 14:51:03.221384 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") pod \"kube-state-metrics-0\" (UID: \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\") " pod="openstack/kube-state-metrics-0" Jan 21 14:51:03 crc kubenswrapper[4902]: I0121 14:51:03.237129 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.096929 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.099258 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.101945 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-kfz9n" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.110089 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.110318 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.113252 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.120202 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.123199 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.136680 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201736 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201855 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201917 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.201990 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202029 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " 
pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202069 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202103 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202161 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202200 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202261 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.202287 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303253 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303304 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc 
kubenswrapper[4902]: I0121 14:51:06.303351 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303381 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303410 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303432 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303451 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303568 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.303974 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.309142 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.309573 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.313208 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.318158 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.319237 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.320366 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.320381 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.320439 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.320458 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.321516 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.321530 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") pod \"ovn-controller-kxwsm\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.322407 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") pod \"ovn-controller-ovs-4sm9h\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.426499 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:06 crc kubenswrapper[4902]: I0121 14:51:06.441606 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.583630 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.585733 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.591837 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.591950 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.591855 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.592109 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.592256 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8qgzz" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.663224 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.724717 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.724759 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.724780 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.724986 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.725023 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.725073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.725102 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.725168 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868450 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868500 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868531 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868556 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868677 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868704 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.868727 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 
14:51:07.868913 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.871151 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.871517 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.871867 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.890880 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.891445 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.892897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.896030 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.906774 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:07 crc kubenswrapper[4902]: I0121 14:51:07.962127 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.619497 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.625501 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.629295 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8bqds" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.629343 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.629890 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.629910 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.648462 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726243 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726290 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726360 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726397 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726421 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726442 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc 
kubenswrapper[4902]: I0121 14:51:10.726477 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.726523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.827973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828102 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828145 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828233 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828272 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828303 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828387 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") 
pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.828664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.829173 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.829665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.830337 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.835430 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.835752 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.836103 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.860515 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.869553 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") pod \"ovsdbserver-sb-0\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:10 crc kubenswrapper[4902]: I0121 14:51:10.948377 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:14 crc kubenswrapper[4902]: E0121 14:51:14.484396 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 21 14:51:14 crc kubenswrapper[4902]: E0121 14:51:14.484884 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sth8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(67f50f65-9151-4444-9680-f86e0f256069): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:51:14 crc kubenswrapper[4902]: E0121 14:51:14.486567 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="67f50f65-9151-4444-9680-f86e0f256069" Jan 21 14:51:14 crc kubenswrapper[4902]: E0121 14:51:14.687012 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="67f50f65-9151-4444-9680-f86e0f256069" Jan 21 14:51:17 crc kubenswrapper[4902]: I0121 14:51:17.769687 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:51:17 crc kubenswrapper[4902]: I0121 14:51:17.770026 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:51:19 crc kubenswrapper[4902]: I0121 14:51:19.632093 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.088197 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.088695 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl6z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-9cn2p_openstack(15a1e285-4b20-4390-8e14-d9d2f0101c71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.089988 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" podUID="15a1e285-4b20-4390-8e14-d9d2f0101c71" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.112036 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.112165 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7rwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-kldb7_openstack(d70f1f30-fc0e-48a8-a7b7-cf43c23331e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.113445 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.122173 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.122293 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccpbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-jnphw_openstack(056e5d1c-0c8e-4988-8e3d-bd133023ce30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.217118 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" Jan 21 14:51:20 crc kubenswrapper[4902]: I0121 14:51:20.726703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerStarted","Data":"bf2f4711a987253bd77a78040ec2bd0cf16012bd15444fb1b640251be787c875"} Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.730257 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" Jan 21 14:51:20 crc kubenswrapper[4902]: E0121 14:51:20.730786 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" Jan 21 14:51:20 crc kubenswrapper[4902]: I0121 14:51:20.934520 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/memcached-0"] Jan 21 14:51:20 crc kubenswrapper[4902]: I0121 14:51:20.981636 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.038536 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.123586 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:51:21 crc kubenswrapper[4902]: W0121 14:51:21.135941 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8135258_f03d_4c9a_be6f_7dd1dd099188.slice/crio-4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c WatchSource:0}: Error finding container 4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c: Status 404 returned error can't find the container with id 4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.337235 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:51:21 crc kubenswrapper[4902]: W0121 14:51:21.352352 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaf5f1ad_bafb_4a54_b8fd_503d1a3a5fd3.slice/crio-2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb WatchSource:0}: Error finding container 2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb: Status 404 returned error can't find the container with id 2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.383900 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.387895 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.473528 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") pod \"15a1e285-4b20-4390-8e14-d9d2f0101c71\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.473575 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") pod \"15a1e285-4b20-4390-8e14-d9d2f0101c71\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.473632 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") pod \"15a1e285-4b20-4390-8e14-d9d2f0101c71\" (UID: \"15a1e285-4b20-4390-8e14-d9d2f0101c71\") " Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.474571 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15a1e285-4b20-4390-8e14-d9d2f0101c71" (UID: "15a1e285-4b20-4390-8e14-d9d2f0101c71"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.474993 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config" (OuterVolumeSpecName: "config") pod "15a1e285-4b20-4390-8e14-d9d2f0101c71" (UID: "15a1e285-4b20-4390-8e14-d9d2f0101c71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.481944 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6" (OuterVolumeSpecName: "kube-api-access-tl6z6") pod "15a1e285-4b20-4390-8e14-d9d2f0101c71" (UID: "15a1e285-4b20-4390-8e14-d9d2f0101c71"). InnerVolumeSpecName "kube-api-access-tl6z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.484181 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:51:21 crc kubenswrapper[4902]: W0121 14:51:21.487008 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa512c9_b91a_4a30_8a23_548ef53b094e.slice/crio-802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8 WatchSource:0}: Error finding container 802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8: Status 404 returned error can't find the container with id 802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8 Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.575737 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.575767 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl6z6\" (UniqueName: \"kubernetes.io/projected/15a1e285-4b20-4390-8e14-d9d2f0101c71-kube-api-access-tl6z6\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.575777 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15a1e285-4b20-4390-8e14-d9d2f0101c71-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.736646 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerStarted","Data":"7b6bfe3f7296114e25ecf2caceede712b35695e06d9545a4b2270d1cce053ea2"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.737572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" event={"ID":"15a1e285-4b20-4390-8e14-d9d2f0101c71","Type":"ContainerDied","Data":"d9d43db0013e5ca723cebb5651583404b69e5c48b2b76ea7263f612afe6ee4be"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.737604 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-9cn2p" Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.738533 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerStarted","Data":"2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.739557 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2c70bcdb-316e-4246-b333-ddaf6438c6ee","Type":"ContainerStarted","Data":"012af9c88121ed6a56a653b1c142d5e67759c3d8ac9efeda00265ffdb3f91980"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.740425 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113","Type":"ContainerStarted","Data":"83455c4bf3aeb7b7c76443c4b9198dde4cf810334ccfb634a4b5c17df6d13e97"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.741327 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerStarted","Data":"0e2225caf36121574255d90227f9966e2a981074b953f7b34948ace2a7d9beae"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.742668 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerStarted","Data":"802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.744106 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerStarted","Data":"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.745553 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm" event={"ID":"e8135258-f03d-4c9a-be6f-7dd1dd099188","Type":"ContainerStarted","Data":"4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c"} Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.815827 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:51:21 crc kubenswrapper[4902]: I0121 14:51:21.823625 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-9cn2p"] Jan 21 14:51:22 crc kubenswrapper[4902]: I0121 14:51:22.311849 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a1e285-4b20-4390-8e14-d9d2f0101c71" path="/var/lib/kubelet/pods/15a1e285-4b20-4390-8e14-d9d2f0101c71/volumes" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.930308 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerStarted","Data":"9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.931803 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm" event={"ID":"e8135258-f03d-4c9a-be6f-7dd1dd099188","Type":"ContainerStarted","Data":"339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.931904 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-kxwsm" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.933529 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2c70bcdb-316e-4246-b333-ddaf6438c6ee","Type":"ContainerStarted","Data":"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.933683 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.936106 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113","Type":"ContainerStarted","Data":"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.936311 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.937784 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerStarted","Data":"00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.939867 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerStarted","Data":"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.944650 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerStarted","Data":"9c7eb232194bf5acf0b72c5e4e2b10f32410c50f4767d8979981cf5af8e7ed7d"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.946564 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerStarted","Data":"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb"} Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.958156 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kxwsm" podStartSLOduration=15.846521812 podStartE2EDuration="22.95813955s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="2026-01-21 14:51:21.141170575 +0000 UTC m=+1043.218003604" lastFinishedPulling="2026-01-21 14:51:28.252788313 +0000 UTC m=+1050.329621342" observedRunningTime="2026-01-21 14:51:28.948774569 +0000 UTC m=+1051.025607598" watchObservedRunningTime="2026-01-21 14:51:28.95813955 +0000 UTC m=+1051.034972579" Jan 21 14:51:28 crc kubenswrapper[4902]: I0121 14:51:28.975124 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.692774964 podStartE2EDuration="27.975103782s" podCreationTimestamp="2026-01-21 14:51:01 +0000 UTC" firstStartedPulling="2026-01-21 14:51:20.965931949 +0000 UTC m=+1043.042764978" lastFinishedPulling="2026-01-21 14:51:28.248260767 +0000 UTC m=+1050.325093796" observedRunningTime="2026-01-21 14:51:28.970465123 +0000 UTC m=+1051.047298152" watchObservedRunningTime="2026-01-21 14:51:28.975103782 +0000 UTC m=+1051.051936811" Jan 21 14:51:29 crc kubenswrapper[4902]: I0121 14:51:29.046656 4902 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.764505679 podStartE2EDuration="27.046629132s" podCreationTimestamp="2026-01-21 14:51:02 +0000 UTC" firstStartedPulling="2026-01-21 14:51:20.986192872 +0000 UTC m=+1043.063025901" lastFinishedPulling="2026-01-21 14:51:28.268316325 +0000 UTC m=+1050.345149354" observedRunningTime="2026-01-21 14:51:29.0418975 +0000 UTC m=+1051.118730539" watchObservedRunningTime="2026-01-21 14:51:29.046629132 +0000 UTC m=+1051.123462161" Jan 21 14:51:29 crc kubenswrapper[4902]: I0121 14:51:29.954288 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerID="e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb" exitCode=0 Jan 21 14:51:29 crc kubenswrapper[4902]: I0121 14:51:29.954363 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerDied","Data":"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb"} Jan 21 14:51:30 crc kubenswrapper[4902]: I0121 14:51:30.967397 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerStarted","Data":"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8"} Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.975839 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerStarted","Data":"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1"} Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.976211 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.976226 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.977281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerStarted","Data":"fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931"} Jan 21 14:51:31 crc kubenswrapper[4902]: I0121 14:51:31.979475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerStarted","Data":"f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.030879 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4sm9h" podStartSLOduration=19.316401865 podStartE2EDuration="26.030860672s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="2026-01-21 14:51:21.489407335 +0000 UTC m=+1043.566240364" lastFinishedPulling="2026-01-21 14:51:28.203866142 +0000 UTC m=+1050.280699171" observedRunningTime="2026-01-21 14:51:32.001422173 +0000 UTC m=+1054.078255222" watchObservedRunningTime="2026-01-21 14:51:32.030860672 +0000 UTC m=+1054.107693701" Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.033668 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.889791386 podStartE2EDuration="23.033652489s" podCreationTimestamp="2026-01-21 14:51:09 +0000 UTC" 
firstStartedPulling="2026-01-21 14:51:21.400388338 +0000 UTC m=+1043.477221387" lastFinishedPulling="2026-01-21 14:51:31.544249421 +0000 UTC m=+1053.621082490" observedRunningTime="2026-01-21 14:51:32.030628625 +0000 UTC m=+1054.107461684" watchObservedRunningTime="2026-01-21 14:51:32.033652489 +0000 UTC m=+1054.110485518" Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.329106 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.11345753 podStartE2EDuration="26.329084041s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="2026-01-21 14:51:21.35483444 +0000 UTC m=+1043.431667459" lastFinishedPulling="2026-01-21 14:51:31.570460941 +0000 UTC m=+1053.647293970" observedRunningTime="2026-01-21 14:51:32.07893813 +0000 UTC m=+1054.155771159" watchObservedRunningTime="2026-01-21 14:51:32.329084041 +0000 UTC m=+1054.405917080" Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.963773 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.987759 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerStarted","Data":"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.990105 4902 generic.go:334] "Generic (PLEG): container finished" podID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerID="231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659" exitCode=0 Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.990180 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerDied","Data":"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.992503 4902 generic.go:334] "Generic (PLEG): container finished" podID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerID="9c7eb232194bf5acf0b72c5e4e2b10f32410c50f4767d8979981cf5af8e7ed7d" exitCode=0 Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.992552 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerDied","Data":"9c7eb232194bf5acf0b72c5e4e2b10f32410c50f4767d8979981cf5af8e7ed7d"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.994369 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerDied","Data":"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2"} Jan 21 14:51:32 crc kubenswrapper[4902]: I0121 14:51:32.994801 4902 generic.go:334] "Generic (PLEG): container finished" podID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerID="f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2" exitCode=0 Jan 21 14:51:33 crc kubenswrapper[4902]: I0121 14:51:33.243531 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.004744 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerStarted","Data":"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9"} Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.007246 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerStarted","Data":"6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a"} Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.010528 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerStarted","Data":"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5"} Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.011152 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.038796 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.887059694 podStartE2EDuration="36.038773454s" podCreationTimestamp="2026-01-21 14:50:58 +0000 UTC" firstStartedPulling="2026-01-21 14:51:20.116171053 +0000 UTC m=+1042.193004082" lastFinishedPulling="2026-01-21 14:51:28.267884813 +0000 UTC m=+1050.344717842" observedRunningTime="2026-01-21 14:51:34.034633539 +0000 UTC m=+1056.111466568" watchObservedRunningTime="2026-01-21 14:51:34.038773454 +0000 UTC m=+1056.115606493" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.058423 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.84749614 podStartE2EDuration="35.058406871s" podCreationTimestamp="2026-01-21 14:50:59 +0000 UTC" firstStartedPulling="2026-01-21 14:51:21.039222308 +0000 UTC m=+1043.116055337" lastFinishedPulling="2026-01-21 14:51:28.250133039 +0000 UTC m=+1050.326966068" observedRunningTime="2026-01-21 14:51:34.057714971 +0000 UTC m=+1056.134548010" watchObservedRunningTime="2026-01-21 14:51:34.058406871 +0000 UTC m=+1056.135239900" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.072885 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" podStartSLOduration=2.398618788 podStartE2EDuration="38.072864263s" podCreationTimestamp="2026-01-21 14:50:56 +0000 UTC" firstStartedPulling="2026-01-21 14:50:57.062850488 +0000 UTC m=+1019.139683517" lastFinishedPulling="2026-01-21 14:51:32.737095963 +0000 UTC m=+1054.813928992" observedRunningTime="2026-01-21 14:51:34.072239645 +0000 UTC m=+1056.149072674" watchObservedRunningTime="2026-01-21 14:51:34.072864263 +0000 UTC m=+1056.149697302" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.949017 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.962961 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:34 crc kubenswrapper[4902]: I0121 14:51:34.988685 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.019704 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 
14:51:35.019794 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.060331 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.335307 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.494437 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.495664 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.499791 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.515665 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.516681 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.522337 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.554565 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.567071 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608580 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608679 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608741 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608777 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608808 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608843 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608868 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608892 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608931 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.608961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.710884 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.710946 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.710981 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 
14:51:35.711008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711032 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711086 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711121 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711164 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711226 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711357 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.711606 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.712287 4902 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.712372 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.712703 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.712872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.716920 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.720057 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.735763 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") pod \"ovn-controller-metrics-c27gh\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.735871 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") pod \"dnsmasq-dns-5b79764b65-4rfxx\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.752799 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.785715 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.786929 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.788928 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.810767 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.881887 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.902730 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915309 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915605 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915704 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:35 crc kubenswrapper[4902]: I0121 14:51:35.915776 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.017729 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.017858 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 
crc kubenswrapper[4902]: I0121 14:51:36.017937 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.018059 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.018138 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.164104 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.164626 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.165314 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.165711 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.166588 4902 generic.go:334] "Generic (PLEG): container finished" podID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" containerID="0f5c9ee80727b9e8632f288b2b3d7cfcfa77af2c1b8caf9690b633d832028cb6" exitCode=0 Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.166684 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" event={"ID":"056e5d1c-0c8e-4988-8e3d-bd133023ce30","Type":"ContainerDied","Data":"0f5c9ee80727b9e8632f288b2b3d7cfcfa77af2c1b8caf9690b633d832028cb6"} Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.168207 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="dnsmasq-dns" 
containerID="cri-o://dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" gracePeriod=10 Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.173136 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") pod \"dnsmasq-dns-586b989cdc-8sbwv\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.236488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.392906 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.395666 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.422154 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.422928 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jlxv8" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.423151 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.423285 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.423402 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.429512 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.470746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.470816 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.470858 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.470901 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: 
I0121 14:51:36.470945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.471006 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.471053 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.487092 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572249 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572324 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572391 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572434 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.572549 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.575317 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.575857 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.576492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.577960 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.581524 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.584035 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.640032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") pod \"ovn-northd-0\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.667123 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:36 crc kubenswrapper[4902]: W0121 14:51:36.673694 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d63009_9822_4096_9bf1_8f71d4dacd7b.slice/crio-ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5 WatchSource:0}: Error finding container ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5: Status 404 returned error can't find the container with id ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5 Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.724619 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:51:36 crc kubenswrapper[4902]: 
I0121 14:51:36.744877 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.792036 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.875886 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") pod \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.876139 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") pod \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.876171 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") pod \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\" (UID: \"056e5d1c-0c8e-4988-8e3d-bd133023ce30\") " Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.882712 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs" (OuterVolumeSpecName: "kube-api-access-ccpbs") pod "056e5d1c-0c8e-4988-8e3d-bd133023ce30" (UID: "056e5d1c-0c8e-4988-8e3d-bd133023ce30"). InnerVolumeSpecName "kube-api-access-ccpbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.907519 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config" (OuterVolumeSpecName: "config") pod "056e5d1c-0c8e-4988-8e3d-bd133023ce30" (UID: "056e5d1c-0c8e-4988-8e3d-bd133023ce30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.909243 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "056e5d1c-0c8e-4988-8e3d-bd133023ce30" (UID: "056e5d1c-0c8e-4988-8e3d-bd133023ce30"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.979398 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccpbs\" (UniqueName: \"kubernetes.io/projected/056e5d1c-0c8e-4988-8e3d-bd133023ce30-kube-api-access-ccpbs\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.979425 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.979434 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/056e5d1c-0c8e-4988-8e3d-bd133023ce30-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4902]: I0121 14:51:36.987634 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.080840 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") pod \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.080925 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") pod \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.081051 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") pod \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\" (UID: \"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9\") " Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.088492 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc" (OuterVolumeSpecName: "kube-api-access-g7rwc") pod "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" (UID: "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9"). InnerVolumeSpecName "kube-api-access-g7rwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.119151 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" (UID: "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.135042 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config" (OuterVolumeSpecName: "config") pod "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" (UID: "d70f1f30-fc0e-48a8-a7b7-cf43c23331e9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.174430 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179317 4902 generic.go:334] "Generic (PLEG): container finished" podID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerID="dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" exitCode=0 Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179395 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerDied","Data":"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" event={"ID":"d70f1f30-fc0e-48a8-a7b7-cf43c23331e9","Type":"ContainerDied","Data":"5296913110c392e15b54a0f987eb61dded57186e36461bf1b89e97184d22ce54"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179440 4902 scope.go:117] "RemoveContainer" containerID="dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.179659 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-kldb7" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.184855 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.184878 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.184888 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7rwc\" (UniqueName: \"kubernetes.io/projected/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9-kube-api-access-g7rwc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.187256 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c27gh" event={"ID":"8891f80f-6cb0-4dc6-9f92-836d465e1c84","Type":"ContainerStarted","Data":"1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.187295 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c27gh" event={"ID":"8891f80f-6cb0-4dc6-9f92-836d465e1c84","Type":"ContainerStarted","Data":"b07d2a04235629b220fbd6c246ba8a8b5088d31b321ecb0ba20c9950895f0f74"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.192249 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" event={"ID":"056e5d1c-0c8e-4988-8e3d-bd133023ce30","Type":"ContainerDied","Data":"ca0db72aa9304747531a3dcf3ad66d4587463b234d1c9123a01c6a7b05b94cfc"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.192342 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-jnphw" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.197710 4902 generic.go:334] "Generic (PLEG): container finished" podID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerID="1bfce2ecde4206400633bc9ed5a03f89132046bc198571a9ea9d8cdbe7e9aafa" exitCode=0 Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.198218 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerDied","Data":"1bfce2ecde4206400633bc9ed5a03f89132046bc198571a9ea9d8cdbe7e9aafa"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.198276 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerStarted","Data":"ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5"} Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.203718 4902 scope.go:117] "RemoveContainer" containerID="f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.211310 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-c27gh" podStartSLOduration=2.211293222 podStartE2EDuration="2.211293222s" podCreationTimestamp="2026-01-21 14:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:37.201389477 +0000 UTC m=+1059.278222506" watchObservedRunningTime="2026-01-21 14:51:37.211293222 +0000 UTC m=+1059.288126251" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.241207 4902 scope.go:117] "RemoveContainer" containerID="dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" Jan 21 14:51:37 crc kubenswrapper[4902]: E0121 14:51:37.242106 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5\": container with ID starting with dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5 not found: ID does not exist" containerID="dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.242192 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5"} err="failed to get container status \"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5\": rpc error: code = NotFound desc = could not find container \"dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5\": container with ID starting with dc35f19f3a1fce27a38748541476905a80db7b906f4b2a5743a52aa6879f94b5 not found: ID does not exist" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.242347 4902 scope.go:117] "RemoveContainer" containerID="f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.247620 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:51:37 crc kubenswrapper[4902]: E0121 14:51:37.250347 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2\": container with 
ID starting with f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2 not found: ID does not exist" containerID="f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.250398 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2"} err="failed to get container status \"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2\": rpc error: code = NotFound desc = could not find container \"f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2\": container with ID starting with f49956a6a8aaab4287075b0d5dea23e1e32bd5448385906a6c3a40565425bcb2 not found: ID does not exist" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.250424 4902 scope.go:117] "RemoveContainer" containerID="0f5c9ee80727b9e8632f288b2b3d7cfcfa77af2c1b8caf9690b633d832028cb6" Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.300925 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-kldb7"] Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.341669 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.355894 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:51:37 crc kubenswrapper[4902]: I0121 14:51:37.364908 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-jnphw"] Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.206774 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerStarted","Data":"fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247"} Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.207122 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.211315 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerID="b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c" exitCode=0 Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.211360 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerDied","Data":"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c"} Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.211378 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerStarted","Data":"88562bce194dc6ec93ebe7a9e6fd7cffd8f7caf51a0e036e6b3531ce6275c539"} Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.213583 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerStarted","Data":"710e2e791f44aa4a7534510792c8ca7893edb756d648bcd8efc2a038da9f4e30"} Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.228712 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" podStartSLOduration=3.228698223 podStartE2EDuration="3.228698223s" 
podCreationTimestamp="2026-01-21 14:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:38.226521823 +0000 UTC m=+1060.303354862" watchObservedRunningTime="2026-01-21 14:51:38.228698223 +0000 UTC m=+1060.305531252" Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.316419 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" path="/var/lib/kubelet/pods/056e5d1c-0c8e-4988-8e3d-bd133023ce30/volumes" Jan 21 14:51:38 crc kubenswrapper[4902]: I0121 14:51:38.317234 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" path="/var/lib/kubelet/pods/d70f1f30-fc0e-48a8-a7b7-cf43c23331e9/volumes" Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.224475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerStarted","Data":"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d"} Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.224816 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.227712 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerStarted","Data":"e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db"} Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.227753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerStarted","Data":"c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3"} Jan 21 14:51:39 crc kubenswrapper[4902]: I0121 14:51:39.243659 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" podStartSLOduration=4.243636245 podStartE2EDuration="4.243636245s" podCreationTimestamp="2026-01-21 14:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:39.241549937 +0000 UTC m=+1061.318382976" watchObservedRunningTime="2026-01-21 14:51:39.243636245 +0000 UTC m=+1061.320469274" Jan 21 14:51:40 crc kubenswrapper[4902]: I0121 14:51:40.136912 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 14:51:40 crc kubenswrapper[4902]: I0121 14:51:40.136997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 14:51:40 crc kubenswrapper[4902]: I0121 14:51:40.234517 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 14:51:41 crc kubenswrapper[4902]: I0121 14:51:41.479322 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:41 crc kubenswrapper[4902]: I0121 14:51:41.479742 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:41 crc kubenswrapper[4902]: I0121 14:51:41.678363 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:41 crc 
kubenswrapper[4902]: I0121 14:51:41.700001 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.660396658 podStartE2EDuration="5.699982785s" podCreationTimestamp="2026-01-21 14:51:36 +0000 UTC" firstStartedPulling="2026-01-21 14:51:37.373415384 +0000 UTC m=+1059.450248413" lastFinishedPulling="2026-01-21 14:51:38.413001511 +0000 UTC m=+1060.489834540" observedRunningTime="2026-01-21 14:51:39.265296228 +0000 UTC m=+1061.342129277" watchObservedRunningTime="2026-01-21 14:51:41.699982785 +0000 UTC m=+1063.776815814" Jan 21 14:51:41 crc kubenswrapper[4902]: I0121 14:51:41.995784 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 14:51:42 crc kubenswrapper[4902]: I0121 14:51:42.079114 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 14:51:42 crc kubenswrapper[4902]: I0121 14:51:42.308749 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.354627 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.355143 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="dnsmasq-dns" containerID="cri-o://bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" gracePeriod=10 Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.372690 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424139 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:51:43 crc kubenswrapper[4902]: E0121 14:51:43.424545 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="dnsmasq-dns" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424562 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="dnsmasq-dns" Jan 21 14:51:43 crc kubenswrapper[4902]: E0121 14:51:43.424589 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424595 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: E0121 14:51:43.424612 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424618 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424780 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70f1f30-fc0e-48a8-a7b7-cf43c23331e9" containerName="dnsmasq-dns" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.424795 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="056e5d1c-0c8e-4988-8e3d-bd133023ce30" containerName="init" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 
14:51:43.425597 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.441608 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529543 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529626 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529683 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529704 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.529729 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.630904 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631247 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631271 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631291 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631338 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.631764 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.632002 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.632393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.632397 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.672686 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") pod \"dnsmasq-dns-67fdf7998c-xglm5\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.811774 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.892766 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.935576 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.935957 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.936020 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.936051 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:43 crc kubenswrapper[4902]: I0121 14:51:43.936140 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") pod \"6fbfbb64-2e43-4c95-b011-bec06204855d\" (UID: \"6fbfbb64-2e43-4c95-b011-bec06204855d\") " Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.044193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w" (OuterVolumeSpecName: "kube-api-access-vbz4w") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "kube-api-access-vbz4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.061484 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.063450 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config" (OuterVolumeSpecName: "config") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.074506 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.086605 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6fbfbb64-2e43-4c95-b011-bec06204855d" (UID: "6fbfbb64-2e43-4c95-b011-bec06204855d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139236 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139266 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbz4w\" (UniqueName: \"kubernetes.io/projected/6fbfbb64-2e43-4c95-b011-bec06204855d-kube-api-access-vbz4w\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139275 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139285 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.139293 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fbfbb64-2e43-4c95-b011-bec06204855d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260322 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerID="bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" exitCode=0 Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260379 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260443 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerDied","Data":"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d"} Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-8sbwv" event={"ID":"6fbfbb64-2e43-4c95-b011-bec06204855d","Type":"ContainerDied","Data":"88562bce194dc6ec93ebe7a9e6fd7cffd8f7caf51a0e036e6b3531ce6275c539"} Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.260505 4902 scope.go:117] "RemoveContainer" containerID="bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.279541 4902 scope.go:117] "RemoveContainer" containerID="b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.293390 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.304910 4902 scope.go:117] "RemoveContainer" containerID="bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.305383 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d\": container with ID starting with bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d not found: ID does not exist" containerID="bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.305420 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d"} err="failed to get container status \"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d\": rpc error: code = NotFound desc = could not find container \"bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d\": container with ID starting with bd48fd0041152517be8a51464273237379075a36c1a233810c429e621846c94d not found: ID does not exist" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.305467 4902 scope.go:117] "RemoveContainer" containerID="b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.305903 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c\": container with ID starting with b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c not found: ID does not exist" containerID="b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.305926 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c"} err="failed to get container status \"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c\": rpc error: code = NotFound desc = could not find container 
\"b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c\": container with ID starting with b2e9ea7c18b393632fe84eb829e1d3e7e911dd393dc030fbec80ac0c2b50440c not found: ID does not exist" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.306534 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-8sbwv"] Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.366149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:51:44 crc kubenswrapper[4902]: W0121 14:51:44.368663 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26a414c_0df3_4829_ad7a_c444b795160a.slice/crio-f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c WatchSource:0}: Error finding container f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c: Status 404 returned error can't find the container with id f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.495648 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.496329 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="init" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.496351 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="init" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.496392 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="dnsmasq-dns" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.496402 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="dnsmasq-dns" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.496597 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" containerName="dnsmasq-dns" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.502443 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.505344 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.505680 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.505949 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.506004 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-s2887" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.531346 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546411 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546471 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546532 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546610 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.546647 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.647933 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648042 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648119 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.648421 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.648523 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648554 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648659 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: E0121 14:51:44.648654 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:51:45.148635354 +0000 UTC m=+1067.225468383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.648916 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.649256 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.670955 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:44 crc kubenswrapper[4902]: I0121 14:51:44.671515 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.143247 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.144139 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.146488 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.146560 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.146624 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.157717 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:45 crc kubenswrapper[4902]: E0121 14:51:45.157851 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:45 crc kubenswrapper[4902]: E0121 14:51:45.157877 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:45 crc kubenswrapper[4902]: E0121 14:51:45.157933 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. 
No retries permitted until 2026-01-21 14:51:46.157914626 +0000 UTC m=+1068.234747645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.169796 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.258894 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.258946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.258967 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkh72\" (UniqueName: \"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.259024 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.259113 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.259146 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.259183 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.272100 4902 generic.go:334] "Generic (PLEG): container finished" podID="f26a414c-0df3-4829-ad7a-c444b795160a" 
containerID="e19fecd53265fa377cce915a6f9d5418debd0cc0619facc38c21547ed0d4b095" exitCode=0 Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.272436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerDied","Data":"e19fecd53265fa377cce915a6f9d5418debd0cc0619facc38c21547ed0d4b095"} Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.272460 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerStarted","Data":"f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c"} Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.361165 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362262 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362295 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362294 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.362336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkh72\" (UniqueName: 
\"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.363345 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.364066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.366260 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.367847 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.372444 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.379871 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkh72\" (UniqueName: \"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") pod \"swift-ring-rebalance-hmcs2\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.495859 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:51:45 crc kubenswrapper[4902]: I0121 14:51:45.883717 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.253312 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:46 crc kubenswrapper[4902]: E0121 14:51:46.253490 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:46 crc kubenswrapper[4902]: E0121 14:51:46.253516 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:46 crc kubenswrapper[4902]: E0121 14:51:46.253576 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:51:48.253557453 +0000 UTC m=+1070.330390472 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.305248 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbfbb64-2e43-4c95-b011-bec06204855d" path="/var/lib/kubelet/pods/6fbfbb64-2e43-4c95-b011-bec06204855d/volumes" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.474729 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:51:46 crc kubenswrapper[4902]: W0121 14:51:46.477213 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9959d508_3783_403a_bdd6_65159821fc9e.slice/crio-fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647 WatchSource:0}: Error finding container fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647: Status 404 returned error can't find the container with id fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647 Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.788654 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.789953 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.792205 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.801005 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.818450 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.820344 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.837079 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.860894 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.861177 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.861352 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.861462 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.963003 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.963093 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.963138 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.963163 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc 
kubenswrapper[4902]: I0121 14:51:46.964018 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.964609 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.982816 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") pod \"glance-db-create-62fdp\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " pod="openstack/glance-db-create-62fdp" Jan 21 14:51:46 crc kubenswrapper[4902]: I0121 14:51:46.986895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") pod \"glance-a0c6-account-create-update-g2pwx\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.109866 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.139501 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-62fdp" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.312463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hmcs2" event={"ID":"9959d508-3783-403a-bdd6-65159821fc9e","Type":"ContainerStarted","Data":"fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647"} Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.421125 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:51:47 crc kubenswrapper[4902]: W0121 14:51:47.426528 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22787b52_e166_415c_906e_788b1b73ccd0.slice/crio-85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40 WatchSource:0}: Error finding container 85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40: Status 404 returned error can't find the container with id 85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40 Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.769989 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.770116 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.770196 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.771315 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.771422 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1" gracePeriod=600 Jan 21 14:51:47 crc kubenswrapper[4902]: I0121 14:51:47.819710 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:51:47 crc kubenswrapper[4902]: W0121 14:51:47.822458 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a55b324_126b_4571_a2ab_1ea8005e3c46.slice/crio-d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805 WatchSource:0}: Error finding container d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805: Status 404 returned error can't find the container with id 
d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805 Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.302822 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:48 crc kubenswrapper[4902]: E0121 14:51:48.303076 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:48 crc kubenswrapper[4902]: E0121 14:51:48.303109 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:48 crc kubenswrapper[4902]: E0121 14:51:48.303172 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:51:52.303152076 +0000 UTC m=+1074.379985115 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.322846 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-62fdp" event={"ID":"9a55b324-126b-4571-a2ab-1ea8005e3c46","Type":"ContainerStarted","Data":"d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805"} Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.324161 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0c6-account-create-update-g2pwx" event={"ID":"22787b52-e166-415c-906e-788b1b73ccd0","Type":"ContainerStarted","Data":"85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40"} Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.502622 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.507845 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.514775 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.518715 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.610083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.610209 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.711550 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.711680 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.712638 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.733388 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") pod \"root-account-create-update-gtbbh\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:48 crc kubenswrapper[4902]: I0121 14:51:48.864946 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.350747 4902 generic.go:334] "Generic (PLEG): container finished" podID="22787b52-e166-415c-906e-788b1b73ccd0" containerID="04e51686a115d7efa7ccafee00c3c35f348877ed4159bb02ef8fdec725c74808" exitCode=0 Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.350830 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0c6-account-create-update-g2pwx" event={"ID":"22787b52-e166-415c-906e-788b1b73ccd0","Type":"ContainerDied","Data":"04e51686a115d7efa7ccafee00c3c35f348877ed4159bb02ef8fdec725c74808"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.353857 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.354597 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerStarted","Data":"2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.355089 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.357525 4902 generic.go:334] "Generic (PLEG): container finished" podID="9a55b324-126b-4571-a2ab-1ea8005e3c46" containerID="f6b39c880fbd40f2782ed02884cfa856d1ecf3dfd90d97c9787d318a34cf7495" exitCode=0 Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.357588 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-62fdp" event={"ID":"9a55b324-126b-4571-a2ab-1ea8005e3c46","Type":"ContainerDied","Data":"f6b39c880fbd40f2782ed02884cfa856d1ecf3dfd90d97c9787d318a34cf7495"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.360537 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1" exitCode=0 Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.360591 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.360624 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369"} Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.360644 4902 scope.go:117] "RemoveContainer" containerID="097b55fd9fa87b27fef8f06ba3cbfef04c2339f11dc61a41eeced54a3451dbca" Jan 21 14:51:49 crc kubenswrapper[4902]: I0121 14:51:49.410094 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" podStartSLOduration=6.410075678 podStartE2EDuration="6.410075678s" podCreationTimestamp="2026-01-21 14:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:49.408541795 +0000 UTC m=+1071.485374834" watchObservedRunningTime="2026-01-21 14:51:49.410075678 +0000 UTC m=+1071.486908717" 
Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.006765 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.008292 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.020912 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.089787 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.089916 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.121930 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.123282 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.125650 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.132238 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.191144 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.191235 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.191333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.191384 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") pod 
\"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.192401 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.241197 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") pod \"keystone-db-create-bdp9p\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.292404 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.292472 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.293533 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.324287 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.325567 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.331064 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") pod \"keystone-b2af-account-create-update-g4dvb\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.331935 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.335776 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.353480 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.354527 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.357607 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.375166 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.394557 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.395444 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.395588 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.395727 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.446220 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.756327 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.756864 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.760359 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.763014 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.763141 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.767031 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.786092 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") pod \"placement-db-create-x9wcg\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.793788 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") pod \"placement-431b-account-create-update-trwhd\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.950420 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.976520 
4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:51 crc kubenswrapper[4902]: I0121 14:51:51.990176 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:52 crc kubenswrapper[4902]: I0121 14:51:52.336553 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:51:52 crc kubenswrapper[4902]: E0121 14:51:52.336776 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:51:52 crc kubenswrapper[4902]: E0121 14:51:52.336798 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:51:52 crc kubenswrapper[4902]: E0121 14:51:52.336844 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:52:00.336827648 +0000 UTC m=+1082.413660677 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.400719 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-62fdp" event={"ID":"9a55b324-126b-4571-a2ab-1ea8005e3c46","Type":"ContainerDied","Data":"d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805"} Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.400982 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2eb35fb5799ce1ef42682c3bb497116e47e162109bced6658a35474b3df1805" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.410096 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gtbbh" event={"ID":"1954463b-8937-4042-a917-fe047862f4b8","Type":"ContainerStarted","Data":"1e9ffed32be9a49bc998cff74fdbc43e5ee1377e006d1bfc773044e302a7d8ed"} Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.411985 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a0c6-account-create-update-g2pwx" event={"ID":"22787b52-e166-415c-906e-788b1b73ccd0","Type":"ContainerDied","Data":"85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40"} Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.412060 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85146753c8fc39ddffc18caf6df87521671191f1f4b39e27ecf1ba7e0f546b40" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.536931 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-62fdp" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.612403 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") pod \"9a55b324-126b-4571-a2ab-1ea8005e3c46\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.612440 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") pod \"9a55b324-126b-4571-a2ab-1ea8005e3c46\" (UID: \"9a55b324-126b-4571-a2ab-1ea8005e3c46\") " Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.613344 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a55b324-126b-4571-a2ab-1ea8005e3c46" (UID: "9a55b324-126b-4571-a2ab-1ea8005e3c46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.614921 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.619125 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8" (OuterVolumeSpecName: "kube-api-access-trqf8") pod "9a55b324-126b-4571-a2ab-1ea8005e3c46" (UID: "9a55b324-126b-4571-a2ab-1ea8005e3c46"). InnerVolumeSpecName "kube-api-access-trqf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.714949 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") pod \"22787b52-e166-415c-906e-788b1b73ccd0\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.715154 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") pod \"22787b52-e166-415c-906e-788b1b73ccd0\" (UID: \"22787b52-e166-415c-906e-788b1b73ccd0\") " Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.715725 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trqf8\" (UniqueName: \"kubernetes.io/projected/9a55b324-126b-4571-a2ab-1ea8005e3c46-kube-api-access-trqf8\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.715745 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a55b324-126b-4571-a2ab-1ea8005e3c46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.715914 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22787b52-e166-415c-906e-788b1b73ccd0" (UID: "22787b52-e166-415c-906e-788b1b73ccd0"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.719281 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs" (OuterVolumeSpecName: "kube-api-access-pfmhs") pod "22787b52-e166-415c-906e-788b1b73ccd0" (UID: "22787b52-e166-415c-906e-788b1b73ccd0"). InnerVolumeSpecName "kube-api-access-pfmhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.813269 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.817481 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22787b52-e166-415c-906e-788b1b73ccd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.817508 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfmhs\" (UniqueName: \"kubernetes.io/projected/22787b52-e166-415c-906e-788b1b73ccd0-kube-api-access-pfmhs\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.938675 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.938935 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="dnsmasq-dns" containerID="cri-o://fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247" gracePeriod=10 Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.976763 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:51:53 crc kubenswrapper[4902]: I0121 14:51:53.987141 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.038287 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.046778 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:51:54 crc kubenswrapper[4902]: W0121 14:51:54.052071 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef10c95_ed5c_4479_b01f_8f956d478dcf.slice/crio-414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20 WatchSource:0}: Error finding container 414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20: Status 404 returned error can't find the container with id 414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20 Jan 21 14:51:54 crc kubenswrapper[4902]: W0121 14:51:54.065734 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56dceeb6_ebc6_44b8_aba5_5f203f1a8d5d.slice/crio-2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a WatchSource:0}: Error finding container 2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a: Status 404 returned error can't find the container with id 
2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.424963 4902 generic.go:334] "Generic (PLEG): container finished" podID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerID="fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247" exitCode=0 Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.425017 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerDied","Data":"fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.425460 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" event={"ID":"02d63009-9822-4096-9bf1-8f71d4dacd7b","Type":"ContainerDied","Data":"ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.425475 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef2c29276f6c62af6940939c2d275db20eac9e33e286538b85878bd0fb0e83e5" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.427594 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-431b-account-create-update-trwhd" event={"ID":"eef10c95-ed5c-4479-b01f-8f956d478dcf","Type":"ContainerStarted","Data":"f8e614c23f60db2d2289c45f03de6ca360a2d28723c52bf7d5442f33e4ef3cb9"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.427619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-431b-account-create-update-trwhd" event={"ID":"eef10c95-ed5c-4479-b01f-8f956d478dcf","Type":"ContainerStarted","Data":"414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.432787 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerID="92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54" exitCode=0 Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.432841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerDied","Data":"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.436607 4902 generic.go:334] "Generic (PLEG): container finished" podID="1954463b-8937-4042-a917-fe047862f4b8" containerID="183c9aacc3759e23732dbe091d0a8125502d61ad06cbf81f3beb450ef89e7614" exitCode=0 Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.436727 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gtbbh" event={"ID":"1954463b-8937-4042-a917-fe047862f4b8","Type":"ContainerDied","Data":"183c9aacc3759e23732dbe091d0a8125502d61ad06cbf81f3beb450ef89e7614"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.439207 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hmcs2" event={"ID":"9959d508-3783-403a-bdd6-65159821fc9e","Type":"ContainerStarted","Data":"29527624e52b61188971d77dcdc19feadc4e519866ced3ad0c73f26335294506"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.447977 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-431b-account-create-update-trwhd" podStartSLOduration=3.447953341 podStartE2EDuration="3.447953341s" 
podCreationTimestamp="2026-01-21 14:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:54.44610647 +0000 UTC m=+1076.522939499" watchObservedRunningTime="2026-01-21 14:51:54.447953341 +0000 UTC m=+1076.524786370" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.448799 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bdp9p" event={"ID":"fd5b13a8-7950-40cf-9255-d2c9f34c6add","Type":"ContainerStarted","Data":"0db12f9364007deb6067c2c445b04573d37703a8a3c7073268d343c3233327a1"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.448849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bdp9p" event={"ID":"fd5b13a8-7950-40cf-9255-d2c9f34c6add","Type":"ContainerStarted","Data":"5cb8cc3872ae580644ce626d0d89d1daf3e291701338bc5d3629a7cd3738096c"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.453938 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b2af-account-create-update-g4dvb" event={"ID":"8f05425e-47d3-4358-844c-9b661f254e22","Type":"ContainerStarted","Data":"7d4422a73cd9c69151e982d6a24415a420632cf5387be9a9908b89fae4b7d136"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.453976 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b2af-account-create-update-g4dvb" event={"ID":"8f05425e-47d3-4358-844c-9b661f254e22","Type":"ContainerStarted","Data":"4390a64682acbf30933444954ba3902efa753868aaafded59aeec375b82f230e"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.462207 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a0c6-account-create-update-g2pwx" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.466060 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x9wcg" event={"ID":"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d","Type":"ContainerStarted","Data":"2e960884dfc54470df60f875a779cf61caa394a9b9eb4b58037a649720bdac73"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.466101 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x9wcg" event={"ID":"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d","Type":"ContainerStarted","Data":"2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a"} Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.466169 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-62fdp" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.498113 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.515772 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hmcs2" podStartSLOduration=2.722722963 podStartE2EDuration="9.515755798s" podCreationTimestamp="2026-01-21 14:51:45 +0000 UTC" firstStartedPulling="2026-01-21 14:51:46.479964863 +0000 UTC m=+1068.556797892" lastFinishedPulling="2026-01-21 14:51:53.272997698 +0000 UTC m=+1075.349830727" observedRunningTime="2026-01-21 14:51:54.508163627 +0000 UTC m=+1076.584996656" watchObservedRunningTime="2026-01-21 14:51:54.515755798 +0000 UTC m=+1076.592588827" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.530360 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b2af-account-create-update-g4dvb" podStartSLOduration=3.530342434 podStartE2EDuration="3.530342434s" podCreationTimestamp="2026-01-21 14:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:54.525259083 +0000 UTC m=+1076.602092112" watchObservedRunningTime="2026-01-21 14:51:54.530342434 +0000 UTC m=+1076.607175453" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.532895 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") pod \"02d63009-9822-4096-9bf1-8f71d4dacd7b\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.532952 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") pod \"02d63009-9822-4096-9bf1-8f71d4dacd7b\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.533019 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") pod \"02d63009-9822-4096-9bf1-8f71d4dacd7b\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.533103 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") pod \"02d63009-9822-4096-9bf1-8f71d4dacd7b\" (UID: \"02d63009-9822-4096-9bf1-8f71d4dacd7b\") " Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.545337 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg" (OuterVolumeSpecName: "kube-api-access-hq5hg") pod "02d63009-9822-4096-9bf1-8f71d4dacd7b" (UID: "02d63009-9822-4096-9bf1-8f71d4dacd7b"). InnerVolumeSpecName "kube-api-access-hq5hg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.576307 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-bdp9p" podStartSLOduration=4.576261692 podStartE2EDuration="4.576261692s" podCreationTimestamp="2026-01-21 14:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:54.544982291 +0000 UTC m=+1076.621815320" watchObservedRunningTime="2026-01-21 14:51:54.576261692 +0000 UTC m=+1076.653094741" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.592422 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-x9wcg" podStartSLOduration=3.592405711 podStartE2EDuration="3.592405711s" podCreationTimestamp="2026-01-21 14:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:54.580612933 +0000 UTC m=+1076.657445962" watchObservedRunningTime="2026-01-21 14:51:54.592405711 +0000 UTC m=+1076.669238740" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.606544 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02d63009-9822-4096-9bf1-8f71d4dacd7b" (UID: "02d63009-9822-4096-9bf1-8f71d4dacd7b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.610379 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02d63009-9822-4096-9bf1-8f71d4dacd7b" (UID: "02d63009-9822-4096-9bf1-8f71d4dacd7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.621958 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config" (OuterVolumeSpecName: "config") pod "02d63009-9822-4096-9bf1-8f71d4dacd7b" (UID: "02d63009-9822-4096-9bf1-8f71d4dacd7b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.634918 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.634946 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.634958 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02d63009-9822-4096-9bf1-8f71d4dacd7b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:54 crc kubenswrapper[4902]: I0121 14:51:54.634967 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq5hg\" (UniqueName: \"kubernetes.io/projected/02d63009-9822-4096-9bf1-8f71d4dacd7b-kube-api-access-hq5hg\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.470779 4902 generic.go:334] "Generic (PLEG): container finished" podID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" containerID="0db12f9364007deb6067c2c445b04573d37703a8a3c7073268d343c3233327a1" exitCode=0 Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.470884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bdp9p" event={"ID":"fd5b13a8-7950-40cf-9255-d2c9f34c6add","Type":"ContainerDied","Data":"0db12f9364007deb6067c2c445b04573d37703a8a3c7073268d343c3233327a1"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.474091 4902 generic.go:334] "Generic (PLEG): container finished" podID="8f05425e-47d3-4358-844c-9b661f254e22" containerID="7d4422a73cd9c69151e982d6a24415a420632cf5387be9a9908b89fae4b7d136" exitCode=0 Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.474141 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b2af-account-create-update-g4dvb" event={"ID":"8f05425e-47d3-4358-844c-9b661f254e22","Type":"ContainerDied","Data":"7d4422a73cd9c69151e982d6a24415a420632cf5387be9a9908b89fae4b7d136"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.476187 4902 generic.go:334] "Generic (PLEG): container finished" podID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" containerID="2e960884dfc54470df60f875a779cf61caa394a9b9eb4b58037a649720bdac73" exitCode=0 Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.476267 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x9wcg" event={"ID":"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d","Type":"ContainerDied","Data":"2e960884dfc54470df60f875a779cf61caa394a9b9eb4b58037a649720bdac73"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.478496 4902 generic.go:334] "Generic (PLEG): container finished" podID="eef10c95-ed5c-4479-b01f-8f956d478dcf" containerID="f8e614c23f60db2d2289c45f03de6ca360a2d28723c52bf7d5442f33e4ef3cb9" exitCode=0 Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.478576 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-431b-account-create-update-trwhd" event={"ID":"eef10c95-ed5c-4479-b01f-8f956d478dcf","Type":"ContainerDied","Data":"f8e614c23f60db2d2289c45f03de6ca360a2d28723c52bf7d5442f33e4ef3cb9"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.480601 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerStarted","Data":"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70"} Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.480696 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-4rfxx" Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.525758 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.673066654 podStartE2EDuration="58.525733652s" podCreationTimestamp="2026-01-21 14:50:57 +0000 UTC" firstStartedPulling="2026-01-21 14:50:59.279476579 +0000 UTC m=+1021.356309608" lastFinishedPulling="2026-01-21 14:51:20.132143577 +0000 UTC m=+1042.208976606" observedRunningTime="2026-01-21 14:51:55.515271161 +0000 UTC m=+1077.592104200" watchObservedRunningTime="2026-01-21 14:51:55.525733652 +0000 UTC m=+1077.602566681" Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.643900 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.651991 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-4rfxx"] Jan 21 14:51:55 crc kubenswrapper[4902]: I0121 14:51:55.928860 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.058355 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") pod \"1954463b-8937-4042-a917-fe047862f4b8\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.058422 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") pod \"1954463b-8937-4042-a917-fe047862f4b8\" (UID: \"1954463b-8937-4042-a917-fe047862f4b8\") " Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.059744 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1954463b-8937-4042-a917-fe047862f4b8" (UID: "1954463b-8937-4042-a917-fe047862f4b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.070359 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz" (OuterVolumeSpecName: "kube-api-access-bjddz") pod "1954463b-8937-4042-a917-fe047862f4b8" (UID: "1954463b-8937-4042-a917-fe047862f4b8"). InnerVolumeSpecName "kube-api-access-bjddz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.160910 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjddz\" (UniqueName: \"kubernetes.io/projected/1954463b-8937-4042-a917-fe047862f4b8-kube-api-access-bjddz\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.160952 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1954463b-8937-4042-a917-fe047862f4b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.307493 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" path="/var/lib/kubelet/pods/02d63009-9822-4096-9bf1-8f71d4dacd7b/volumes" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.492836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gtbbh" event={"ID":"1954463b-8937-4042-a917-fe047862f4b8","Type":"ContainerDied","Data":"1e9ffed32be9a49bc998cff74fdbc43e5ee1377e006d1bfc773044e302a7d8ed"} Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.492881 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e9ffed32be9a49bc998cff74fdbc43e5ee1377e006d1bfc773044e302a7d8ed" Jan 21 14:51:56 crc kubenswrapper[4902]: I0121 14:51:56.492926 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gtbbh" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.011322 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073118 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073353 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a55b324-126b-4571-a2ab-1ea8005e3c46" containerName="mariadb-database-create" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073364 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a55b324-126b-4571-a2ab-1ea8005e3c46" containerName="mariadb-database-create" Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073383 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="init" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073388 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="init" Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073398 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="dnsmasq-dns" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073404 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="dnsmasq-dns" Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073413 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1954463b-8937-4042-a917-fe047862f4b8" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073419 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1954463b-8937-4042-a917-fe047862f4b8" containerName="mariadb-account-create-update" Jan 21 
14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073427 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22787b52-e166-415c-906e-788b1b73ccd0" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073433 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="22787b52-e166-415c-906e-788b1b73ccd0" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: E0121 14:51:57.073454 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f05425e-47d3-4358-844c-9b661f254e22" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073459 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f05425e-47d3-4358-844c-9b661f254e22" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073607 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f05425e-47d3-4358-844c-9b661f254e22" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073620 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a55b324-126b-4571-a2ab-1ea8005e3c46" containerName="mariadb-database-create" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073627 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d63009-9822-4096-9bf1-8f71d4dacd7b" containerName="dnsmasq-dns" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073645 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="22787b52-e166-415c-906e-788b1b73ccd0" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.073653 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1954463b-8937-4042-a917-fe047862f4b8" containerName="mariadb-account-create-update" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.074090 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.077697 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-n2trw" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.077936 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.080379 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.175810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") pod \"8f05425e-47d3-4358-844c-9b661f254e22\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.175908 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") pod \"8f05425e-47d3-4358-844c-9b661f254e22\" (UID: \"8f05425e-47d3-4358-844c-9b661f254e22\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176210 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176453 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176497 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.176774 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f05425e-47d3-4358-844c-9b661f254e22" (UID: "8f05425e-47d3-4358-844c-9b661f254e22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.178737 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.182257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc" (OuterVolumeSpecName: "kube-api-access-b4dfc") pod "8f05425e-47d3-4358-844c-9b661f254e22" (UID: "8f05425e-47d3-4358-844c-9b661f254e22"). InnerVolumeSpecName "kube-api-access-b4dfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.195280 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.203562 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277838 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") pod \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277883 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") pod \"eef10c95-ed5c-4479-b01f-8f956d478dcf\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277932 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") pod \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\" (UID: \"fd5b13a8-7950-40cf-9255-d2c9f34c6add\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277955 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") pod \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.277972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") pod \"eef10c95-ed5c-4479-b01f-8f956d478dcf\" (UID: \"eef10c95-ed5c-4479-b01f-8f956d478dcf\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278133 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") pod \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\" (UID: \"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d\") " Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278282 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278388 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278456 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278474 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278517 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4dfc\" (UniqueName: \"kubernetes.io/projected/8f05425e-47d3-4358-844c-9b661f254e22-kube-api-access-b4dfc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.278533 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f05425e-47d3-4358-844c-9b661f254e22-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.279490 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eef10c95-ed5c-4479-b01f-8f956d478dcf" (UID: "eef10c95-ed5c-4479-b01f-8f956d478dcf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.280150 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd5b13a8-7950-40cf-9255-d2c9f34c6add" (UID: "fd5b13a8-7950-40cf-9255-d2c9f34c6add"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.280888 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" (UID: "56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.282936 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.282973 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.283507 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f" (OuterVolumeSpecName: "kube-api-access-sn79f") pod "56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" (UID: "56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d"). InnerVolumeSpecName "kube-api-access-sn79f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.283875 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc" (OuterVolumeSpecName: "kube-api-access-jbcnc") pod "fd5b13a8-7950-40cf-9255-d2c9f34c6add" (UID: "fd5b13a8-7950-40cf-9255-d2c9f34c6add"). InnerVolumeSpecName "kube-api-access-jbcnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.286392 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms" (OuterVolumeSpecName: "kube-api-access-s2hms") pod "eef10c95-ed5c-4479-b01f-8f956d478dcf" (UID: "eef10c95-ed5c-4479-b01f-8f956d478dcf"). InnerVolumeSpecName "kube-api-access-s2hms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.286923 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.296865 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") pod \"glance-db-sync-ktqgj\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.379751 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbcnc\" (UniqueName: \"kubernetes.io/projected/fd5b13a8-7950-40cf-9255-d2c9f34c6add-kube-api-access-jbcnc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380029 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn79f\" (UniqueName: \"kubernetes.io/projected/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-kube-api-access-sn79f\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380064 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2hms\" (UniqueName: \"kubernetes.io/projected/eef10c95-ed5c-4479-b01f-8f956d478dcf-kube-api-access-s2hms\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380078 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380091 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5b13a8-7950-40cf-9255-d2c9f34c6add-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.380101 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eef10c95-ed5c-4479-b01f-8f956d478dcf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.492481 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ktqgj" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.510961 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b2af-account-create-update-g4dvb" event={"ID":"8f05425e-47d3-4358-844c-9b661f254e22","Type":"ContainerDied","Data":"4390a64682acbf30933444954ba3902efa753868aaafded59aeec375b82f230e"} Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.511001 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4390a64682acbf30933444954ba3902efa753868aaafded59aeec375b82f230e" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.511727 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b2af-account-create-update-g4dvb" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.512637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x9wcg" event={"ID":"56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d","Type":"ContainerDied","Data":"2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a"} Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.512666 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2874be65f89a65a539647e2aeacf5ef7b77341fc540507819036a595a616e45a" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.512725 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x9wcg" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.522667 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-431b-account-create-update-trwhd" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.522967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-431b-account-create-update-trwhd" event={"ID":"eef10c95-ed5c-4479-b01f-8f956d478dcf","Type":"ContainerDied","Data":"414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20"} Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.523007 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="414ad226955c987ec7c025c549b229f68b66622db3c6c7377bf41c281090ba20" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.525019 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bdp9p" event={"ID":"fd5b13a8-7950-40cf-9255-d2c9f34c6add","Type":"ContainerDied","Data":"5cb8cc3872ae580644ce626d0d89d1daf3e291701338bc5d3629a7cd3738096c"} Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.525080 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bdp9p" Jan 21 14:51:57 crc kubenswrapper[4902]: I0121 14:51:57.525101 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cb8cc3872ae580644ce626d0d89d1daf3e291701338bc5d3629a7cd3738096c" Jan 21 14:51:58 crc kubenswrapper[4902]: W0121 14:51:58.083456 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb5e91bc_7b75_4275_b1b6_998431981fca.slice/crio-06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b WatchSource:0}: Error finding container 06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b: Status 404 returned error can't find the container with id 06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b Jan 21 14:51:58 crc kubenswrapper[4902]: I0121 14:51:58.083607 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:51:58 crc kubenswrapper[4902]: I0121 14:51:58.540571 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ktqgj" event={"ID":"eb5e91bc-7b75-4275-b1b6-998431981fca","Type":"ContainerStarted","Data":"06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b"} Jan 21 14:51:58 crc kubenswrapper[4902]: I0121 14:51:58.793136 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:51:59 crc kubenswrapper[4902]: I0121 14:51:59.787198 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:51:59 crc kubenswrapper[4902]: I0121 14:51:59.795125 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gtbbh"] Jan 21 14:52:00 crc kubenswrapper[4902]: I0121 14:52:00.302691 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1954463b-8937-4042-a917-fe047862f4b8" path="/var/lib/kubelet/pods/1954463b-8937-4042-a917-fe047862f4b8/volumes" Jan 21 14:52:00 crc kubenswrapper[4902]: I0121 14:52:00.431324 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:52:00 crc kubenswrapper[4902]: E0121 14:52:00.431949 4902 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:52:00 crc kubenswrapper[4902]: E0121 14:52:00.432024 4902 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:52:00 crc kubenswrapper[4902]: E0121 14:52:00.432139 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift podName:ee214fec-083a-4abd-b65e-003bccee24fa nodeName:}" failed. No retries permitted until 2026-01-21 14:52:16.432113108 +0000 UTC m=+1098.508946177 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift") pod "swift-storage-0" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa") : configmap "swift-ring-files" not found Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.530759 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kxwsm" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:52:01 crc kubenswrapper[4902]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 14:52:01 crc kubenswrapper[4902]: > Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.586936 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.591499 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.818906 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"] Jan 21 14:52:01 crc kubenswrapper[4902]: E0121 14:52:01.819331 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef10c95-ed5c-4479-b01f-8f956d478dcf" containerName="mariadb-account-create-update" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819352 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef10c95-ed5c-4479-b01f-8f956d478dcf" containerName="mariadb-account-create-update" Jan 21 14:52:01 crc kubenswrapper[4902]: E0121 14:52:01.819379 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" containerName="mariadb-database-create" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819388 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" containerName="mariadb-database-create" Jan 21 14:52:01 crc kubenswrapper[4902]: E0121 14:52:01.819407 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" containerName="mariadb-database-create" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819415 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" containerName="mariadb-database-create" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819588 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef10c95-ed5c-4479-b01f-8f956d478dcf" containerName="mariadb-account-create-update" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819604 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" containerName="mariadb-database-create" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.819610 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" containerName="mariadb-database-create" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.820116 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.822264 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.832462 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"] Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854366 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854438 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854488 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854511 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854529 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.854573 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956163 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956259 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") pod 
\"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956297 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956323 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956386 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956458 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956526 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.956583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:01 crc kubenswrapper[4902]: I0121 14:52:01.957357 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:02 crc kubenswrapper[4902]: I0121 14:52:02.068729 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") pod 
\"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:02 crc kubenswrapper[4902]: I0121 14:52:02.070713 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") pod \"ovn-controller-kxwsm-config-s4np8\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:02 crc kubenswrapper[4902]: I0121 14:52:02.145558 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:02 crc kubenswrapper[4902]: I0121 14:52:02.866912 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"] Jan 21 14:52:02 crc kubenswrapper[4902]: W0121 14:52:02.875575 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b6eeccf_57f9_48ef_8b50_f1b3cdadd658.slice/crio-b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b WatchSource:0}: Error finding container b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b: Status 404 returned error can't find the container with id b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b Jan 21 14:52:03 crc kubenswrapper[4902]: I0121 14:52:03.591461 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-s4np8" event={"ID":"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658","Type":"ContainerStarted","Data":"b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b"} Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.600305 4902 generic.go:334] "Generic (PLEG): container finished" podID="9959d508-3783-403a-bdd6-65159821fc9e" containerID="29527624e52b61188971d77dcdc19feadc4e519866ced3ad0c73f26335294506" exitCode=0 Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.600386 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hmcs2" event={"ID":"9959d508-3783-403a-bdd6-65159821fc9e","Type":"ContainerDied","Data":"29527624e52b61188971d77dcdc19feadc4e519866ced3ad0c73f26335294506"} Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.602217 4902 generic.go:334] "Generic (PLEG): container finished" podID="67f50f65-9151-4444-9680-f86e0f256069" containerID="61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80" exitCode=0 Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.602276 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerDied","Data":"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80"} Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.605580 4902 generic.go:334] "Generic (PLEG): container finished" podID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" containerID="4de70b4a162bef7d46289abd4a1b9363b5ded88ef279f8bfde6f5eb04e8068c8" exitCode=0 Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.605625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-s4np8" event={"ID":"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658","Type":"ContainerDied","Data":"4de70b4a162bef7d46289abd4a1b9363b5ded88ef279f8bfde6f5eb04e8068c8"} Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.811187 4902 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/root-account-create-update-gvjmj"] Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.812105 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.814873 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.818692 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gvjmj"] Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.948935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:04 crc kubenswrapper[4902]: I0121 14:52:04.949003 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.050658 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.050728 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.051499 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.072065 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") pod \"root-account-create-update-gvjmj\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:05 crc kubenswrapper[4902]: I0121 14:52:05.139719 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:06 crc kubenswrapper[4902]: I0121 14:52:06.617468 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kxwsm" Jan 21 14:52:08 crc kubenswrapper[4902]: I0121 14:52:08.796570 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:52:16 crc kubenswrapper[4902]: I0121 14:52:16.442397 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:52:16 crc kubenswrapper[4902]: I0121 14:52:16.470276 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"swift-storage-0\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " pod="openstack/swift-storage-0" Jan 21 14:52:16 crc kubenswrapper[4902]: I0121 14:52:16.674548 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:52:18 crc kubenswrapper[4902]: E0121 14:52:18.607350 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f" Jan 21 14:52:18 crc kubenswrapper[4902]: E0121 14:52:18.608138 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7zxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-ktqgj_openstack(eb5e91bc-7b75-4275-b1b6-998431981fca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:52:18 crc kubenswrapper[4902]: E0121 14:52:18.609828 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-ktqgj" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.723620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hmcs2" event={"ID":"9959d508-3783-403a-bdd6-65159821fc9e","Type":"ContainerDied","Data":"fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647"} Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.723656 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef5a480d26d112bdfd5701ee7922c211bab55d5f67520948d56b886a9288647" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.724963 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-s4np8" event={"ID":"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658","Type":"ContainerDied","Data":"b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b"} Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.724988 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7ff547a0c27e98c4cfee82aa93881bfcf89d0b6e3e36720fced00cb9b5ac79b" Jan 21 14:52:18 crc kubenswrapper[4902]: E0121 14:52:18.726837 4902 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-ktqgj" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.736173 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.746623 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936080 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936135 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936169 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkh72\" (UniqueName: \"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936204 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936239 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936281 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936362 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936414 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: 
\"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936445 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936471 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936508 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") pod \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\" (UID: \"7b6eeccf-57f9-48ef-8b50-f1b3cdadd658\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.936577 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") pod \"9959d508-3783-403a-bdd6-65159821fc9e\" (UID: \"9959d508-3783-403a-bdd6-65159821fc9e\") " Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.942192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.942250 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run" (OuterVolumeSpecName: "var-run") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.942423 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.943344 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.943806 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts" (OuterVolumeSpecName: "scripts") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.944851 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.945377 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.951312 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.955204 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72" (OuterVolumeSpecName: "kube-api-access-fkh72") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "kube-api-access-fkh72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.967235 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts" (OuterVolumeSpecName: "scripts") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.967358 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k" (OuterVolumeSpecName: "kube-api-access-6254k") pod "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" (UID: "7b6eeccf-57f9-48ef-8b50-f1b3cdadd658"). InnerVolumeSpecName "kube-api-access-6254k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.977198 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:18 crc kubenswrapper[4902]: I0121 14:52:18.978276 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9959d508-3783-403a-bdd6-65159821fc9e" (UID: "9959d508-3783-403a-bdd6-65159821fc9e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041336 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9959d508-3783-403a-bdd6-65159821fc9e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041713 4902 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041731 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041742 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkh72\" (UniqueName: \"kubernetes.io/projected/9959d508-3783-403a-bdd6-65159821fc9e-kube-api-access-fkh72\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041754 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041764 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041776 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6254k\" (UniqueName: \"kubernetes.io/projected/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-kube-api-access-6254k\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041788 4902 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041798 4902 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041808 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9959d508-3783-403a-bdd6-65159821fc9e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041819 4902 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041830 4902 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.041840 4902 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9959d508-3783-403a-bdd6-65159821fc9e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.060370 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gvjmj"] Jan 21 14:52:19 crc kubenswrapper[4902]: W0121 14:52:19.060852 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd0110fe_ef40_4a4b_bad7_a3c24aa5089a.slice/crio-2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d WatchSource:0}: Error finding container 2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d: Status 404 returned error can't find the container with id 2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d Jan 21 14:52:19 crc kubenswrapper[4902]: I0121 14:52:19.221692 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:52:19 crc kubenswrapper[4902]: W0121 14:52:19.221782 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee214fec_083a_4abd_b65e_003bccee24fa.slice/crio-6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d WatchSource:0}: Error finding container 6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d: Status 404 returned error can't find the container with id 6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.039292 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d"} Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.056425 4902 generic.go:334] "Generic (PLEG): container finished" podID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" containerID="2afbcb861df82627e26ab173626f1c8e32c7418b9f0cebb9c30b8e8a773fee20" exitCode=0 Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.056524 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gvjmj" event={"ID":"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a","Type":"ContainerDied","Data":"2afbcb861df82627e26ab173626f1c8e32c7418b9f0cebb9c30b8e8a773fee20"} Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.056591 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gvjmj" event={"ID":"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a","Type":"ContainerStarted","Data":"2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d"} Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.063000 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hmcs2" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.063622 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerStarted","Data":"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852"} Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.064392 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-s4np8" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.064570 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.112389 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371952.742416 podStartE2EDuration="1m24.112358895s" podCreationTimestamp="2026-01-21 14:50:56 +0000 UTC" firstStartedPulling="2026-01-21 14:50:58.272083197 +0000 UTC m=+1020.348916216" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:20.105011627 +0000 UTC m=+1102.181844656" watchObservedRunningTime="2026-01-21 14:52:20.112358895 +0000 UTC m=+1102.189191944" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.126084 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"] Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.500739 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxwsm-config-s4np8"] Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.513648 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:20 crc kubenswrapper[4902]: E0121 14:52:20.514029 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959d508-3783-403a-bdd6-65159821fc9e" containerName="swift-ring-rebalance" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514059 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959d508-3783-403a-bdd6-65159821fc9e" containerName="swift-ring-rebalance" Jan 21 14:52:20 crc kubenswrapper[4902]: E0121 14:52:20.514078 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" containerName="ovn-config" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514084 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" containerName="ovn-config" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514406 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" containerName="ovn-config" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514427 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9959d508-3783-403a-bdd6-65159821fc9e" containerName="swift-ring-rebalance" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.514917 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.518388 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.522586 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588437 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588870 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.588887 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.589128 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690391 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690711 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690748 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690921 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690975 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.690991 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.691060 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.691078 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.691131 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.691784 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.693114 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.719262 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") pod \"ovn-controller-kxwsm-config-6v9dp\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:20 crc kubenswrapper[4902]: I0121 14:52:20.841151 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.235595 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.549451 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.719594 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") pod \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.720029 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") pod \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\" (UID: \"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a\") " Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.947097 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" (UID: "dd0110fe-ef40-4a4b-bad7-a3c24aa5089a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.948259 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:21 crc kubenswrapper[4902]: I0121 14:52:21.973532 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587" (OuterVolumeSpecName: "kube-api-access-m8587") pod "dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" (UID: "dd0110fe-ef40-4a4b-bad7-a3c24aa5089a"). InnerVolumeSpecName "kube-api-access-m8587". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.053965 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8587\" (UniqueName: \"kubernetes.io/projected/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a-kube-api-access-m8587\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.088995 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.089054 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.089067 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.090216 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gvjmj" event={"ID":"dd0110fe-ef40-4a4b-bad7-a3c24aa5089a","Type":"ContainerDied","Data":"2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.090251 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a0fa7fabd294929921326289d90686b7bc4b9b5e3e7bc17970170db45a3367d" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.090311 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gvjmj" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.095994 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-6v9dp" event={"ID":"63ce75de-3f15-43b4-96c9-70c0b03f9280","Type":"ContainerStarted","Data":"493b2b2d4384ed074008865724712fb9ff226fa56a68d6f6b8711c1447a2d13b"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.096057 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-6v9dp" event={"ID":"63ce75de-3f15-43b4-96c9-70c0b03f9280","Type":"ContainerStarted","Data":"6c871d02c54f921f78654feadccf4922a73121fd0476fec35c7d6d749146bf27"} Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.117512 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kxwsm-config-6v9dp" podStartSLOduration=2.117495784 podStartE2EDuration="2.117495784s" podCreationTimestamp="2026-01-21 14:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:22.114760247 +0000 UTC m=+1104.191593286" watchObservedRunningTime="2026-01-21 14:52:22.117495784 +0000 UTC m=+1104.194328813" Jan 21 14:52:22 crc kubenswrapper[4902]: I0121 14:52:22.808002 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6eeccf-57f9-48ef-8b50-f1b3cdadd658" path="/var/lib/kubelet/pods/7b6eeccf-57f9-48ef-8b50-f1b3cdadd658/volumes" Jan 21 14:52:23 crc kubenswrapper[4902]: I0121 14:52:23.105136 4902 generic.go:334] "Generic (PLEG): container finished" podID="63ce75de-3f15-43b4-96c9-70c0b03f9280" containerID="493b2b2d4384ed074008865724712fb9ff226fa56a68d6f6b8711c1447a2d13b" exitCode=0 Jan 21 14:52:23 crc kubenswrapper[4902]: I0121 14:52:23.105206 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-6v9dp" event={"ID":"63ce75de-3f15-43b4-96c9-70c0b03f9280","Type":"ContainerDied","Data":"493b2b2d4384ed074008865724712fb9ff226fa56a68d6f6b8711c1447a2d13b"} Jan 21 14:52:23 crc kubenswrapper[4902]: I0121 14:52:23.107971 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad"} Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.446226 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.529854 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.529949 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530000 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530156 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530209 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530345 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") pod \"63ce75de-3f15-43b4-96c9-70c0b03f9280\" (UID: \"63ce75de-3f15-43b4-96c9-70c0b03f9280\") " Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530345 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530553 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run" (OuterVolumeSpecName: "var-run") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.530978 4902 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.531009 4902 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.531024 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63ce75de-3f15-43b4-96c9-70c0b03f9280-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.531029 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.531391 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts" (OuterVolumeSpecName: "scripts") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.536186 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg" (OuterVolumeSpecName: "kube-api-access-cjdcg") pod "63ce75de-3f15-43b4-96c9-70c0b03f9280" (UID: "63ce75de-3f15-43b4-96c9-70c0b03f9280"). InnerVolumeSpecName "kube-api-access-cjdcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.633173 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjdcg\" (UniqueName: \"kubernetes.io/projected/63ce75de-3f15-43b4-96c9-70c0b03f9280-kube-api-access-cjdcg\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.633226 4902 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:25 crc kubenswrapper[4902]: I0121 14:52:25.633245 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63ce75de-3f15-43b4-96c9-70c0b03f9280-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.315690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm-config-6v9dp" event={"ID":"63ce75de-3f15-43b4-96c9-70c0b03f9280","Type":"ContainerDied","Data":"6c871d02c54f921f78654feadccf4922a73121fd0476fec35c7d6d749146bf27"} Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.315730 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c871d02c54f921f78654feadccf4922a73121fd0476fec35c7d6d749146bf27" Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.315766 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm-config-6v9dp" Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.521331 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:26 crc kubenswrapper[4902]: I0121 14:52:26.529486 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxwsm-config-6v9dp"] Jan 21 14:52:28 crc kubenswrapper[4902]: I0121 14:52:28.310456 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ce75de-3f15-43b4-96c9-70c0b03f9280" path="/var/lib/kubelet/pods/63ce75de-3f15-43b4-96c9-70c0b03f9280/volumes" Jan 21 14:52:29 crc kubenswrapper[4902]: I0121 14:52:29.395017 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157"} Jan 21 14:52:30 crc kubenswrapper[4902]: I0121 14:52:30.406208 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a"} Jan 21 14:52:31 crc kubenswrapper[4902]: I0121 14:52:31.418706 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e"} Jan 21 14:52:31 crc kubenswrapper[4902]: I0121 14:52:31.419187 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606"} Jan 21 14:52:32 crc kubenswrapper[4902]: I0121 14:52:32.448316 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9"} Jan 21 14:52:32 crc kubenswrapper[4902]: I0121 14:52:32.448779 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.458442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ktqgj" event={"ID":"eb5e91bc-7b75-4275-b1b6-998431981fca","Type":"ContainerStarted","Data":"dff0b2c9f0b06182d720253d8f2ef15a7b10dcf34cc35665586623d88b252d47"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469663 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469712 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469737 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.469749 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerStarted","Data":"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a"} Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.491472 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ktqgj" podStartSLOduration=2.8625412409999997 podStartE2EDuration="36.491456555s" podCreationTimestamp="2026-01-21 14:51:57 +0000 UTC" firstStartedPulling="2026-01-21 14:51:58.085767847 +0000 UTC m=+1080.162600876" lastFinishedPulling="2026-01-21 14:52:31.714683161 +0000 UTC m=+1113.791516190" observedRunningTime="2026-01-21 14:52:33.482672196 +0000 UTC m=+1115.559505225" watchObservedRunningTime="2026-01-21 14:52:33.491456555 +0000 UTC m=+1115.568289584" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.521891 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.767819182 podStartE2EDuration="50.521863585s" podCreationTimestamp="2026-01-21 14:51:43 +0000 UTC" firstStartedPulling="2026-01-21 14:52:19.224184716 +0000 UTC m=+1101.301017745" lastFinishedPulling="2026-01-21 14:52:31.978229119 +0000 UTC m=+1114.055062148" observedRunningTime="2026-01-21 14:52:33.51108307 +0000 UTC m=+1115.587916109" watchObservedRunningTime="2026-01-21 14:52:33.521863585 +0000 UTC m=+1115.598696624" Jan 21 14:52:33 crc kubenswrapper[4902]: 
I0121 14:52:33.771398 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:52:33 crc kubenswrapper[4902]: E0121 14:52:33.771771 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" containerName="mariadb-account-create-update" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.771790 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" containerName="mariadb-account-create-update" Jan 21 14:52:33 crc kubenswrapper[4902]: E0121 14:52:33.771816 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ce75de-3f15-43b4-96c9-70c0b03f9280" containerName="ovn-config" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.771825 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ce75de-3f15-43b4-96c9-70c0b03f9280" containerName="ovn-config" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.772037 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" containerName="mariadb-account-create-update" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.772085 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ce75de-3f15-43b4-96c9-70c0b03f9280" containerName="ovn-config" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.773120 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.774951 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.798740 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.893977 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.894068 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.894134 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.894181 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 
14:52:33.894239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.894268 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995273 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995365 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995396 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995455 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995481 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.995537 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.996492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.997616 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.998278 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.998895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:33 crc kubenswrapper[4902]: I0121 14:52:33.999661 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:34 crc kubenswrapper[4902]: I0121 14:52:34.019306 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") pod \"dnsmasq-dns-8db84466c-ns7jh\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:34 crc kubenswrapper[4902]: I0121 14:52:34.096428 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:34 crc kubenswrapper[4902]: I0121 14:52:34.550501 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:52:34 crc kubenswrapper[4902]: W0121 14:52:34.561310 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cfdec8c_8d41_4ae4_ad01_a4b76f589140.slice/crio-f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363 WatchSource:0}: Error finding container f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363: Status 404 returned error can't find the container with id f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363 Jan 21 14:52:35 crc kubenswrapper[4902]: I0121 14:52:35.489814 4902 generic.go:334] "Generic (PLEG): container finished" podID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerID="085cc064e188fc067c109385def65abea0a69b47e1fe8f6dadc55d4ea12c4007" exitCode=0 Jan 21 14:52:35 crc kubenswrapper[4902]: I0121 14:52:35.489885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerDied","Data":"085cc064e188fc067c109385def65abea0a69b47e1fe8f6dadc55d4ea12c4007"} Jan 21 14:52:35 crc kubenswrapper[4902]: I0121 14:52:35.490126 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerStarted","Data":"f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363"} Jan 21 14:52:36 crc kubenswrapper[4902]: I0121 14:52:36.500392 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerStarted","Data":"db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f"} Jan 21 14:52:36 crc kubenswrapper[4902]: I0121 14:52:36.500839 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:36 crc kubenswrapper[4902]: I0121 14:52:36.521429 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podStartSLOduration=3.521411357 podStartE2EDuration="3.521411357s" podCreationTimestamp="2026-01-21 14:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:36.517450515 +0000 UTC m=+1118.594283574" watchObservedRunningTime="2026-01-21 14:52:36.521411357 +0000 UTC m=+1118.598244386" Jan 21 14:52:37 crc kubenswrapper[4902]: I0121 14:52:37.799312 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.110601 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.111874 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.122659 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.221685 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.224609 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.233878 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.275932 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.276011 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.323257 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.324531 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.326614 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.337781 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.378425 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.378573 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.378629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.378807 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.379527 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.424887 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") pod \"cinder-db-create-4czjl\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.467242 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.479183 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.482719 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.490525 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.491690 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.491773 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.491974 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.492126 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.492975 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.497449 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.532983 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") pod \"barbican-db-create-np7hz\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.553935 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.612919 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.613012 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.613148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.613173 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.614130 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.641438 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.642910 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.646700 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.648034 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.657113 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.657719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5z5b6" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.657849 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.667257 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.676034 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.695229 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.698316 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") pod \"barbican-9bb1-account-create-update-f5hbr\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.715971 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716063 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716096 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716150 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: 
I0121 14:52:38.716171 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716191 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.716918 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.751675 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") pod \"cinder-7226-account-create-update-krlk5\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.798608 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.799979 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.807985 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819211 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819252 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819274 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819334 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819359 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.819392 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.820780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.824542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.825363 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.838070 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.847408 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") pod \"keystone-db-sync-8n66z\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.853122 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") pod \"neutron-db-create-nxvvs\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.920520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.920632 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.921709 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.939568 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.939838 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") pod \"neutron-457b-account-create-update-2trwh\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:38 crc kubenswrapper[4902]: I0121 14:52:38.958532 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.037289 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.049750 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.134902 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.153052 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.166977 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.415002 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.539881 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.551525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4czjl" event={"ID":"f5dd3ace-42a8-4c8e-8531-0c04f145a002","Type":"ContainerStarted","Data":"3d6ff1e2aa4c6d25b1afbc1c6226ab9dd8acd5472135388cee9dad4beae1dc39"} Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.551578 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4czjl" event={"ID":"f5dd3ace-42a8-4c8e-8531-0c04f145a002","Type":"ContainerStarted","Data":"de1ce05294c31ea3e7485201a8c03eb422a890c761c80f00cfdf53c008a3097c"} Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.553183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-np7hz" event={"ID":"4c6d6225-3f7d-485d-a384-5f0e53c3055d","Type":"ContainerStarted","Data":"af472f5b3bb9010ffaa61382ab0352d28b368f2e713ea44d92c653fb5e095055"} Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.553222 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-np7hz" event={"ID":"4c6d6225-3f7d-485d-a384-5f0e53c3055d","Type":"ContainerStarted","Data":"e51262b04bb5a162fdf6ca544ef19f5ab4091e99cb9d8ee72320234ca0e42e90"} Jan 21 14:52:39 crc kubenswrapper[4902]: W0121 14:52:39.555928 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod095a6aec_1aa5_4754_818a_bbe7eedad9f2.slice/crio-be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925 WatchSource:0}: Error finding container be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925: Status 404 returned error can't find the container with id be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925 Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.556060 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-f5hbr" event={"ID":"d0fa0e74-137e-4ff6-9610-37b9ebe612c9","Type":"ContainerStarted","Data":"21de5eaae6221d07dff1d905aed3b34121b811bb09237e945729567417f596af"} Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.563734 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.587840 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-np7hz" podStartSLOduration=1.587826001 podStartE2EDuration="1.587826001s" podCreationTimestamp="2026-01-21 14:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:39.584104716 +0000 UTC m=+1121.660937745" watchObservedRunningTime="2026-01-21 14:52:39.587826001 +0000 UTC m=+1121.664659030" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.592858 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-4czjl" podStartSLOduration=1.592842133 podStartE2EDuration="1.592842133s" podCreationTimestamp="2026-01-21 14:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:39.569446671 +0000 UTC m=+1121.646279700" watchObservedRunningTime="2026-01-21 14:52:39.592842133 +0000 UTC m=+1121.669675162" Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.780675 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:52:39 crc kubenswrapper[4902]: I0121 14:52:39.791424 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.567603 4902 generic.go:334] "Generic (PLEG): container finished" podID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" containerID="cff876825001ee2c7fa7f8bdbe379da8527d1a33b467f10b305adc0a8747aa98" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.567670 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-krlk5" event={"ID":"e7eab019-1ec9-4109-93f8-2f3caa1fa508","Type":"ContainerDied","Data":"cff876825001ee2c7fa7f8bdbe379da8527d1a33b467f10b305adc0a8747aa98"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.567945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-krlk5" event={"ID":"e7eab019-1ec9-4109-93f8-2f3caa1fa508","Type":"ContainerStarted","Data":"fbbc8fb531dd195dba5a0e18a68911ad9d163e963b3fc357dfbd7a5adc3a9c2a"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.570242 4902 generic.go:334] "Generic (PLEG): container finished" podID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" containerID="af472f5b3bb9010ffaa61382ab0352d28b368f2e713ea44d92c653fb5e095055" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.570309 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-np7hz" event={"ID":"4c6d6225-3f7d-485d-a384-5f0e53c3055d","Type":"ContainerDied","Data":"af472f5b3bb9010ffaa61382ab0352d28b368f2e713ea44d92c653fb5e095055"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.572955 4902 generic.go:334] "Generic (PLEG): container finished" podID="3b401edf-e2ca-4abb-adb7-008ce32403b1" containerID="689584950e8fe70d3a520e19880e648a9cfc4e1dba5d9cf1c7c92f94555adda3" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.573115 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-457b-account-create-update-2trwh" 
event={"ID":"3b401edf-e2ca-4abb-adb7-008ce32403b1","Type":"ContainerDied","Data":"689584950e8fe70d3a520e19880e648a9cfc4e1dba5d9cf1c7c92f94555adda3"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.573148 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-457b-account-create-update-2trwh" event={"ID":"3b401edf-e2ca-4abb-adb7-008ce32403b1","Type":"ContainerStarted","Data":"77d78f0cbe1513d2498b1175b95c511590a6e28042d8b44bfa705339d76861da"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.575476 4902 generic.go:334] "Generic (PLEG): container finished" podID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" containerID="b979d6e79dba97b3f526cfab4506aea68e0143adfc4356de611547f4493bec9f" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.575531 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-f5hbr" event={"ID":"d0fa0e74-137e-4ff6-9610-37b9ebe612c9","Type":"ContainerDied","Data":"b979d6e79dba97b3f526cfab4506aea68e0143adfc4356de611547f4493bec9f"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.578868 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" containerID="3d6ff1e2aa4c6d25b1afbc1c6226ab9dd8acd5472135388cee9dad4beae1dc39" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.578971 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4czjl" event={"ID":"f5dd3ace-42a8-4c8e-8531-0c04f145a002","Type":"ContainerDied","Data":"3d6ff1e2aa4c6d25b1afbc1c6226ab9dd8acd5472135388cee9dad4beae1dc39"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.587923 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8n66z" event={"ID":"9bb9c4d9-a042-4a60-adca-03be4d8ec42d","Type":"ContainerStarted","Data":"587efae09cf4dc3391097e9809b54ed4301ac51dc9f5bbcbd21ecf03e068c9d6"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.592005 4902 generic.go:334] "Generic (PLEG): container finished" podID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" containerID="277691b4cd995bb05532afffdba1de6a3149dc7dc1e0f0e9ce9ba32058b05cf6" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.592091 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nxvvs" event={"ID":"095a6aec-1aa5-4754-818a-bbe7eedad9f2","Type":"ContainerDied","Data":"277691b4cd995bb05532afffdba1de6a3149dc7dc1e0f0e9ce9ba32058b05cf6"} Jan 21 14:52:40 crc kubenswrapper[4902]: I0121 14:52:40.592125 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nxvvs" event={"ID":"095a6aec-1aa5-4754-818a-bbe7eedad9f2","Type":"ContainerStarted","Data":"be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.018656 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.099495 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") pod \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.141960 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27" (OuterVolumeSpecName: "kube-api-access-jtj27") pod "f5dd3ace-42a8-4c8e-8531-0c04f145a002" (UID: "f5dd3ace-42a8-4c8e-8531-0c04f145a002"). InnerVolumeSpecName "kube-api-access-jtj27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.202294 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") pod \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\" (UID: \"f5dd3ace-42a8-4c8e-8531-0c04f145a002\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.203031 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtj27\" (UniqueName: \"kubernetes.io/projected/f5dd3ace-42a8-4c8e-8531-0c04f145a002-kube-api-access-jtj27\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.203699 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5dd3ace-42a8-4c8e-8531-0c04f145a002" (UID: "f5dd3ace-42a8-4c8e-8531-0c04f145a002"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.233198 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.235842 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.238356 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.244146 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.255882 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.304315 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5dd3ace-42a8-4c8e-8531-0c04f145a002-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405048 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") pod \"3b401edf-e2ca-4abb-adb7-008ce32403b1\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405107 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") pod \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405129 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") pod \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405145 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") pod \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405222 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") pod \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\" (UID: \"095a6aec-1aa5-4754-818a-bbe7eedad9f2\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.405849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "095a6aec-1aa5-4754-818a-bbe7eedad9f2" (UID: "095a6aec-1aa5-4754-818a-bbe7eedad9f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406170 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") pod \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\" (UID: \"4c6d6225-3f7d-485d-a384-5f0e53c3055d\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406209 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") pod \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\" (UID: \"e7eab019-1ec9-4109-93f8-2f3caa1fa508\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406235 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") pod \"3b401edf-e2ca-4abb-adb7-008ce32403b1\" (UID: \"3b401edf-e2ca-4abb-adb7-008ce32403b1\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406287 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") pod \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406364 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") pod \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\" (UID: \"d0fa0e74-137e-4ff6-9610-37b9ebe612c9\") " Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406562 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c6d6225-3f7d-485d-a384-5f0e53c3055d" (UID: "4c6d6225-3f7d-485d-a384-5f0e53c3055d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406590 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7eab019-1ec9-4109-93f8-2f3caa1fa508" (UID: "e7eab019-1ec9-4109-93f8-2f3caa1fa508"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406634 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b401edf-e2ca-4abb-adb7-008ce32403b1" (UID: "3b401edf-e2ca-4abb-adb7-008ce32403b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.406798 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0fa0e74-137e-4ff6-9610-37b9ebe612c9" (UID: "d0fa0e74-137e-4ff6-9610-37b9ebe612c9"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408078 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c6d6225-3f7d-485d-a384-5f0e53c3055d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408101 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7eab019-1ec9-4109-93f8-2f3caa1fa508-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408115 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b401edf-e2ca-4abb-adb7-008ce32403b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408126 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.408138 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/095a6aec-1aa5-4754-818a-bbe7eedad9f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412236 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh" (OuterVolumeSpecName: "kube-api-access-dngvh") pod "e7eab019-1ec9-4109-93f8-2f3caa1fa508" (UID: "e7eab019-1ec9-4109-93f8-2f3caa1fa508"). InnerVolumeSpecName "kube-api-access-dngvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412272 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc" (OuterVolumeSpecName: "kube-api-access-b9pbc") pod "4c6d6225-3f7d-485d-a384-5f0e53c3055d" (UID: "4c6d6225-3f7d-485d-a384-5f0e53c3055d"). InnerVolumeSpecName "kube-api-access-b9pbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412290 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg" (OuterVolumeSpecName: "kube-api-access-hmsqg") pod "d0fa0e74-137e-4ff6-9610-37b9ebe612c9" (UID: "d0fa0e74-137e-4ff6-9610-37b9ebe612c9"). InnerVolumeSpecName "kube-api-access-hmsqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412305 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx" (OuterVolumeSpecName: "kube-api-access-xnpwx") pod "3b401edf-e2ca-4abb-adb7-008ce32403b1" (UID: "3b401edf-e2ca-4abb-adb7-008ce32403b1"). InnerVolumeSpecName "kube-api-access-xnpwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.412319 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc" (OuterVolumeSpecName: "kube-api-access-v2hvc") pod "095a6aec-1aa5-4754-818a-bbe7eedad9f2" (UID: "095a6aec-1aa5-4754-818a-bbe7eedad9f2"). InnerVolumeSpecName "kube-api-access-v2hvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.510770 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmsqg\" (UniqueName: \"kubernetes.io/projected/d0fa0e74-137e-4ff6-9610-37b9ebe612c9-kube-api-access-hmsqg\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.511025 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnpwx\" (UniqueName: \"kubernetes.io/projected/3b401edf-e2ca-4abb-adb7-008ce32403b1-kube-api-access-xnpwx\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.511034 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9pbc\" (UniqueName: \"kubernetes.io/projected/4c6d6225-3f7d-485d-a384-5f0e53c3055d-kube-api-access-b9pbc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.511057 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dngvh\" (UniqueName: \"kubernetes.io/projected/e7eab019-1ec9-4109-93f8-2f3caa1fa508-kube-api-access-dngvh\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.511070 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2hvc\" (UniqueName: \"kubernetes.io/projected/095a6aec-1aa5-4754-818a-bbe7eedad9f2-kube-api-access-v2hvc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.611494 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4czjl" event={"ID":"f5dd3ace-42a8-4c8e-8531-0c04f145a002","Type":"ContainerDied","Data":"de1ce05294c31ea3e7485201a8c03eb422a890c761c80f00cfdf53c008a3097c"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.611541 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de1ce05294c31ea3e7485201a8c03eb422a890c761c80f00cfdf53c008a3097c" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.611612 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4czjl" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.628363 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nxvvs" event={"ID":"095a6aec-1aa5-4754-818a-bbe7eedad9f2","Type":"ContainerDied","Data":"be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.628451 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be00237b138fbb3d6818c304c7c9f421c4adcf71270cef33add7a86999a6d925" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.628508 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nxvvs" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.632309 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-krlk5" event={"ID":"e7eab019-1ec9-4109-93f8-2f3caa1fa508","Type":"ContainerDied","Data":"fbbc8fb531dd195dba5a0e18a68911ad9d163e963b3fc357dfbd7a5adc3a9c2a"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.632346 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbbc8fb531dd195dba5a0e18a68911ad9d163e963b3fc357dfbd7a5adc3a9c2a" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.632384 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7226-account-create-update-krlk5" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.634426 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-np7hz" event={"ID":"4c6d6225-3f7d-485d-a384-5f0e53c3055d","Type":"ContainerDied","Data":"e51262b04bb5a162fdf6ca544ef19f5ab4091e99cb9d8ee72320234ca0e42e90"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.634450 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e51262b04bb5a162fdf6ca544ef19f5ab4091e99cb9d8ee72320234ca0e42e90" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.634467 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-np7hz" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.639204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-457b-account-create-update-2trwh" event={"ID":"3b401edf-e2ca-4abb-adb7-008ce32403b1","Type":"ContainerDied","Data":"77d78f0cbe1513d2498b1175b95c511590a6e28042d8b44bfa705339d76861da"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.639223 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-457b-account-create-update-2trwh" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.639236 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77d78f0cbe1513d2498b1175b95c511590a6e28042d8b44bfa705339d76861da" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.644525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-f5hbr" event={"ID":"d0fa0e74-137e-4ff6-9610-37b9ebe612c9","Type":"ContainerDied","Data":"21de5eaae6221d07dff1d905aed3b34121b811bb09237e945729567417f596af"} Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.644582 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21de5eaae6221d07dff1d905aed3b34121b811bb09237e945729567417f596af" Jan 21 14:52:42 crc kubenswrapper[4902]: I0121 14:52:42.644670 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-f5hbr" Jan 21 14:52:43 crc kubenswrapper[4902]: I0121 14:52:43.655402 4902 generic.go:334] "Generic (PLEG): container finished" podID="eb5e91bc-7b75-4275-b1b6-998431981fca" containerID="dff0b2c9f0b06182d720253d8f2ef15a7b10dcf34cc35665586623d88b252d47" exitCode=0 Jan 21 14:52:43 crc kubenswrapper[4902]: I0121 14:52:43.655447 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ktqgj" event={"ID":"eb5e91bc-7b75-4275-b1b6-998431981fca","Type":"ContainerDied","Data":"dff0b2c9f0b06182d720253d8f2ef15a7b10dcf34cc35665586623d88b252d47"} Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.098227 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.232477 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.233004 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="dnsmasq-dns" containerID="cri-o://2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda" gracePeriod=10 Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.666955 4902 generic.go:334] "Generic (PLEG): container finished" podID="f26a414c-0df3-4829-ad7a-c444b795160a" containerID="2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda" exitCode=0 Jan 21 14:52:44 crc kubenswrapper[4902]: I0121 14:52:44.667033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerDied","Data":"2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda"} Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.508529 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ktqgj" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.582416 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") pod \"eb5e91bc-7b75-4275-b1b6-998431981fca\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.582515 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") pod \"eb5e91bc-7b75-4275-b1b6-998431981fca\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.582545 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") pod \"eb5e91bc-7b75-4275-b1b6-998431981fca\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.582589 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") pod \"eb5e91bc-7b75-4275-b1b6-998431981fca\" (UID: \"eb5e91bc-7b75-4275-b1b6-998431981fca\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.603954 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eb5e91bc-7b75-4275-b1b6-998431981fca" (UID: "eb5e91bc-7b75-4275-b1b6-998431981fca"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.604096 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd" (OuterVolumeSpecName: "kube-api-access-r7zxd") pod "eb5e91bc-7b75-4275-b1b6-998431981fca" (UID: "eb5e91bc-7b75-4275-b1b6-998431981fca"). InnerVolumeSpecName "kube-api-access-r7zxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.613981 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb5e91bc-7b75-4275-b1b6-998431981fca" (UID: "eb5e91bc-7b75-4275-b1b6-998431981fca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.645116 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data" (OuterVolumeSpecName: "config-data") pod "eb5e91bc-7b75-4275-b1b6-998431981fca" (UID: "eb5e91bc-7b75-4275-b1b6-998431981fca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.679414 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.687711 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ktqgj" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.700186 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ktqgj" event={"ID":"eb5e91bc-7b75-4275-b1b6-998431981fca","Type":"ContainerDied","Data":"06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b"} Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.700262 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06bc6ebdb802a8f9e6cf31504f046445f838734188e18dd997d7eac178a9c70b" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.701782 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7zxd\" (UniqueName: \"kubernetes.io/projected/eb5e91bc-7b75-4275-b1b6-998431981fca-kube-api-access-r7zxd\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.701815 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.701828 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.701846 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb5e91bc-7b75-4275-b1b6-998431981fca-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.802810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") pod \"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.803326 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") pod \"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.803429 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") pod \"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.803892 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") pod \"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.803953 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") pod 
\"f26a414c-0df3-4829-ad7a-c444b795160a\" (UID: \"f26a414c-0df3-4829-ad7a-c444b795160a\") " Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.809162 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk" (OuterVolumeSpecName: "kube-api-access-pfprk") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "kube-api-access-pfprk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.845552 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.846818 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config" (OuterVolumeSpecName: "config") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.847137 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.850017 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f26a414c-0df3-4829-ad7a-c444b795160a" (UID: "f26a414c-0df3-4829-ad7a-c444b795160a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906103 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906157 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906174 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfprk\" (UniqueName: \"kubernetes.io/projected/f26a414c-0df3-4829-ad7a-c444b795160a-kube-api-access-pfprk\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906185 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:45 crc kubenswrapper[4902]: I0121 14:52:45.906195 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a414c-0df3-4829-ad7a-c444b795160a-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.050523 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.050930 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="init" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.050947 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="init" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.050959 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.050965 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.050980 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="dnsmasq-dns" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.050986 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="dnsmasq-dns" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.050997 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051003 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051012 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b401edf-e2ca-4abb-adb7-008ce32403b1" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051017 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b401edf-e2ca-4abb-adb7-008ce32403b1" 
containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051030 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" containerName="glance-db-sync" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051035 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" containerName="glance-db-sync" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051058 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051064 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051080 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051086 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: E0121 14:52:46.051092 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051098 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051296 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051322 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b401edf-e2ca-4abb-adb7-008ce32403b1" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051335 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051348 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" containerName="dnsmasq-dns" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051357 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" containerName="mariadb-database-create" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051366 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051381 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" containerName="glance-db-sync" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.051391 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" containerName="mariadb-account-create-update" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.052585 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.073398 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.111917 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112004 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112030 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112062 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112079 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7cl\" (UniqueName: \"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.112102 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213584 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213673 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213693 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213708 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7cl\" (UniqueName: \"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.213746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214509 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214618 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214847 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.214901 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.233157 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t7cl\" (UniqueName: 
\"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") pod \"dnsmasq-dns-74dfc89d77-m98rq\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.370102 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.707990 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8n66z" event={"ID":"9bb9c4d9-a042-4a60-adca-03be4d8ec42d","Type":"ContainerStarted","Data":"d68914c4c8e15dba0295d1fd9bb40d5fc60aa1162bc79ce24523d135a247b33e"} Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.711481 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" event={"ID":"f26a414c-0df3-4829-ad7a-c444b795160a","Type":"ContainerDied","Data":"f2e59dbbb8c6adb99cbeb35911a1b6de41741dd0dd7508b3dc32a7f75a4ed19c"} Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.711530 4902 scope.go:117] "RemoveContainer" containerID="2835e971956ba6f6b6ef4af53fbc776463dd7dc5cf9fe6d1cb87ca296d232dda" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.711547 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-xglm5" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.745632 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8n66z" podStartSLOduration=3.075403622 podStartE2EDuration="8.745614376s" podCreationTimestamp="2026-01-21 14:52:38 +0000 UTC" firstStartedPulling="2026-01-21 14:52:39.844922867 +0000 UTC m=+1121.921755896" lastFinishedPulling="2026-01-21 14:52:45.515133621 +0000 UTC m=+1127.591966650" observedRunningTime="2026-01-21 14:52:46.735294674 +0000 UTC m=+1128.812127703" watchObservedRunningTime="2026-01-21 14:52:46.745614376 +0000 UTC m=+1128.822447405" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.756187 4902 scope.go:117] "RemoveContainer" containerID="e19fecd53265fa377cce915a6f9d5418debd0cc0619facc38c21547ed0d4b095" Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.759100 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.779078 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-xglm5"] Jan 21 14:52:46 crc kubenswrapper[4902]: I0121 14:52:46.880454 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:47 crc kubenswrapper[4902]: I0121 14:52:47.719730 4902 generic.go:334] "Generic (PLEG): container finished" podID="55109ced-875d-425c-bfca-9df867fdc7c8" containerID="be041a1eb36c6c6ae62d40b43bc1855f878c0bd015cf7aed44fdbbf69065c16c" exitCode=0 Jan 21 14:52:47 crc kubenswrapper[4902]: I0121 14:52:47.720002 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerDied","Data":"be041a1eb36c6c6ae62d40b43bc1855f878c0bd015cf7aed44fdbbf69065c16c"} Jan 21 14:52:47 crc kubenswrapper[4902]: I0121 14:52:47.720033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" 
event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerStarted","Data":"54d97cf6f5d6e6f549a44efb5396488f668a2044b5853012678a53f9be6c8a9c"} Jan 21 14:52:48 crc kubenswrapper[4902]: I0121 14:52:48.307175 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26a414c-0df3-4829-ad7a-c444b795160a" path="/var/lib/kubelet/pods/f26a414c-0df3-4829-ad7a-c444b795160a/volumes" Jan 21 14:52:48 crc kubenswrapper[4902]: I0121 14:52:48.732884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerStarted","Data":"ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255"} Jan 21 14:52:48 crc kubenswrapper[4902]: I0121 14:52:48.733056 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:48 crc kubenswrapper[4902]: I0121 14:52:48.758903 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" podStartSLOduration=3.758882874 podStartE2EDuration="3.758882874s" podCreationTimestamp="2026-01-21 14:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:48.755626211 +0000 UTC m=+1130.832459250" watchObservedRunningTime="2026-01-21 14:52:48.758882874 +0000 UTC m=+1130.835715903" Jan 21 14:52:49 crc kubenswrapper[4902]: I0121 14:52:49.742405 4902 generic.go:334] "Generic (PLEG): container finished" podID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" containerID="d68914c4c8e15dba0295d1fd9bb40d5fc60aa1162bc79ce24523d135a247b33e" exitCode=0 Jan 21 14:52:49 crc kubenswrapper[4902]: I0121 14:52:49.742456 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8n66z" event={"ID":"9bb9c4d9-a042-4a60-adca-03be4d8ec42d","Type":"ContainerDied","Data":"d68914c4c8e15dba0295d1fd9bb40d5fc60aa1162bc79ce24523d135a247b33e"} Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.116130 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.204516 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") pod \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.204952 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") pod \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.205070 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") pod \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\" (UID: \"9bb9c4d9-a042-4a60-adca-03be4d8ec42d\") " Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.209381 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j" (OuterVolumeSpecName: "kube-api-access-qvf4j") pod "9bb9c4d9-a042-4a60-adca-03be4d8ec42d" (UID: "9bb9c4d9-a042-4a60-adca-03be4d8ec42d"). InnerVolumeSpecName "kube-api-access-qvf4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.226199 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bb9c4d9-a042-4a60-adca-03be4d8ec42d" (UID: "9bb9c4d9-a042-4a60-adca-03be4d8ec42d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.250133 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data" (OuterVolumeSpecName: "config-data") pod "9bb9c4d9-a042-4a60-adca-03be4d8ec42d" (UID: "9bb9c4d9-a042-4a60-adca-03be4d8ec42d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.306521 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvf4j\" (UniqueName: \"kubernetes.io/projected/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-kube-api-access-qvf4j\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.306561 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.306574 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb9c4d9-a042-4a60-adca-03be4d8ec42d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.757663 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8n66z" event={"ID":"9bb9c4d9-a042-4a60-adca-03be4d8ec42d","Type":"ContainerDied","Data":"587efae09cf4dc3391097e9809b54ed4301ac51dc9f5bbcbd21ecf03e068c9d6"} Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.757710 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="587efae09cf4dc3391097e9809b54ed4301ac51dc9f5bbcbd21ecf03e068c9d6" Jan 21 14:52:51 crc kubenswrapper[4902]: I0121 14:52:51.757710 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8n66z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.108792 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.109008 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="dnsmasq-dns" containerID="cri-o://ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255" gracePeriod=10 Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.118309 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.161806 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:52:52 crc kubenswrapper[4902]: E0121 14:52:52.162185 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" containerName="keystone-db-sync" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.162201 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" containerName="keystone-db-sync" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.162363 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" containerName="keystone-db-sync" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.162920 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.164680 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5z5b6" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.164883 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.164989 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.165164 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.168305 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185596 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185650 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblzl\" (UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185701 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185734 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185749 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.185766 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.189115 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.190963 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.215480 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.229484 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300833 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300892 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300921 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300942 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300964 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.300989 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301009 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301030 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: 
I0121 14:52:52.301099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301145 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dblzl\" (UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301190 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.301214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.324940 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.342817 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.349661 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.374384 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.374742 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.386238 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dblzl\" 
(UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") pod \"keystone-bootstrap-dk26m\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.411955 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412208 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412318 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412385 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412446 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.412508 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.414305 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.414908 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.415261 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.415533 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.416365 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.417792 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zlh54"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.419022 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.427162 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-twg7k"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.428194 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.438774 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zlh54"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.444923 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-twg7k"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.446821 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.447009 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.447203 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wh7dk" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.447364 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.459488 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.477070 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.484424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") pod \"dnsmasq-dns-5fdbfbc95f-glhzf\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.507751 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.507976 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d52d6" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.508139 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513407 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513433 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513457 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513484 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513514 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513537 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") pod \"cinder-db-sync-twg7k\" (UID: 
\"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513569 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513596 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513643 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513715 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513740 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513769 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513791 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.513813 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " 
pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.543216 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.582585 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.588741 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.614210 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615016 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615058 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615091 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615112 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615129 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615172 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615194 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") pod \"neutron-db-sync-zlh54\" (UID: 
\"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615257 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615273 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615289 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615310 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615325 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615341 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.615376 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.627921 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.628065 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.632317 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.632593 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.645862 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.656904 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.657418 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.657852 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.662476 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.662820 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.662955 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.663132 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.665359 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.698291 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") pod \"neutron-db-sync-zlh54\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.706740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") pod \"cinder-db-sync-twg7k\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.715625 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") pod \"ceilometer-0\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.722248 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-b64dh"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.723906 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.736890 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.749342 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-26vvq" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.753311 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.773057 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-twg7k" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.787562 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zlh54" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.800111 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-b64dh"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.823990 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.824085 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.824106 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.824136 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.824176 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.845167 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4ds4z"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.846291 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.861894 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.863133 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lxg2q" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.863289 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.894921 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4ds4z"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.900007 4902 generic.go:334] "Generic (PLEG): container finished" podID="55109ced-875d-425c-bfca-9df867fdc7c8" containerID="ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255" exitCode=0 Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.900072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerDied","Data":"ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255"} Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.925906 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.925969 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.925992 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926043 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926078 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926094 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926125 4902 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.926143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.934640 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.937570 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.941872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.956832 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.966226 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:52 crc kubenswrapper[4902]: I0121 14:52:52.991190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") pod \"placement-db-sync-b64dh\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.011966 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.013246 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.026953 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027543 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027688 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027856 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.027999 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.028083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.032462 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.036618 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.058604 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") pod \"barbican-db-sync-4ds4z\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.067241 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.087436 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b64dh" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135034 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135087 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135117 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135166 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135207 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 
14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.135226 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.136157 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.136250 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.136275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.136709 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.137110 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.155213 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") pod \"dnsmasq-dns-6f6f8cb849-xstzn\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.171977 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.201520 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240252 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240350 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240502 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t7cl\" (UniqueName: \"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240544 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240602 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.240635 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") pod \"55109ced-875d-425c-bfca-9df867fdc7c8\" (UID: \"55109ced-875d-425c-bfca-9df867fdc7c8\") " Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.251020 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl" (OuterVolumeSpecName: "kube-api-access-9t7cl") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "kube-api-access-9t7cl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.309318 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:53 crc kubenswrapper[4902]: E0121 14:52:53.309808 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="dnsmasq-dns" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.309824 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="dnsmasq-dns" Jan 21 14:52:53 crc kubenswrapper[4902]: E0121 14:52:53.309837 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="init" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.309845 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="init" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.310113 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" containerName="dnsmasq-dns" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.311144 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.335207 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.335539 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-n2trw" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.335729 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.335886 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.375991 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.377446 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t7cl\" (UniqueName: \"kubernetes.io/projected/55109ced-875d-425c-bfca-9df867fdc7c8-kube-api-access-9t7cl\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.382377 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.406092 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.441613 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.460832 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.472755 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481210 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481405 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481563 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481697 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.481973 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.482097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.484468 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.484497 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.506921 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.514967 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config" (OuterVolumeSpecName: "config") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.518547 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.520972 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.530980 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.531025 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.578965 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.586810 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55109ced-875d-425c-bfca-9df867fdc7c8" (UID: "55109ced-875d-425c-bfca-9df867fdc7c8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603395 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603429 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603516 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603584 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603641 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603669 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603698 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603830 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603859 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603936 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.603999 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.604028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.604139 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.604155 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.604168 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/55109ced-875d-425c-bfca-9df867fdc7c8-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.609668 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.610405 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.612495 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.615020 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.615200 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.615564 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.630439 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.635310 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.646740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc 
kubenswrapper[4902]: I0121 14:52:53.708265 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708339 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708423 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708478 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708555 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.708868 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.709274 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.709664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.714665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.719351 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-twg7k"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.726896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.731761 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.732284 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.737350 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.741408 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.866858 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.875508 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.880443 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zlh54"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.895149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-b64dh"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.899582 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.910665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" event={"ID":"55109ced-875d-425c-bfca-9df867fdc7c8","Type":"ContainerDied","Data":"54d97cf6f5d6e6f549a44efb5396488f668a2044b5853012678a53f9be6c8a9c"} Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.910696 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-m98rq" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.910715 4902 scope.go:117] "RemoveContainer" containerID="ee373bfcc95337faf0d0a8d1d17928705fb09f163f6c59a2da58ed885e64a255" Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.912104 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" event={"ID":"cb4097d7-2ce0-4a7c-b524-82d34c3d368c","Type":"ContainerStarted","Data":"97e640bda0bdcefdf4097a70f64afdf78317cfb56209fc082cd41b1db92af0f8"} Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.946128 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:53 crc kubenswrapper[4902]: I0121 14:52:53.953658 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-m98rq"] Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.050294 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4ds4z"] Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.150625 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:52:54 crc kubenswrapper[4902]: W0121 14:52:54.176410 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2be2f88_2ef5_4773_a31c_a8acd6e27608.slice/crio-7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75 WatchSource:0}: Error finding container 7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75: Status 404 returned error can't find the container with id 7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75 Jan 21 14:52:54 crc kubenswrapper[4902]: W0121 14:52:54.181825 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15ef0c45_4c21_4824_850e_545f66a2c20a.slice/crio-e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f WatchSource:0}: Error finding container e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f: Status 404 returned error can't find the container with id e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f Jan 21 14:52:54 crc kubenswrapper[4902]: W0121 14:52:54.207078 4902 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdcec88e_b290_47a2_a111_f353528b337e.slice/crio-7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153 WatchSource:0}: Error finding container 7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153: Status 404 returned error can't find the container with id 7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153 Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.232277 4902 scope.go:117] "RemoveContainer" containerID="be041a1eb36c6c6ae62d40b43bc1855f878c0bd015cf7aed44fdbbf69065c16c" Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.331282 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55109ced-875d-425c-bfca-9df867fdc7c8" path="/var/lib/kubelet/pods/55109ced-875d-425c-bfca-9df867fdc7c8/volumes" Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.560953 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.636933 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.736979 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.872974 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.928884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"93b4216849acc7e83ad93b11dfedacb592b887f2e39ca3b5b2c28470072e2c3e"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.933385 4902 generic.go:334] "Generic (PLEG): container finished" podID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" containerID="3ff2eb1b3a6a4a80f1d99b80a1c9dfd1a67c9101a6c6bbf9236def171548312e" exitCode=0 Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.933572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" event={"ID":"cb4097d7-2ce0-4a7c-b524-82d34c3d368c","Type":"ContainerDied","Data":"3ff2eb1b3a6a4a80f1d99b80a1c9dfd1a67c9101a6c6bbf9236def171548312e"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.943981 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zlh54" event={"ID":"15ef0c45-4c21-4824-850e-545f66a2c20a","Type":"ContainerStarted","Data":"c05aa038a30ca68cb9b9875b1713755a7a748b30cae2fd412e457a921170733c"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.944050 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zlh54" event={"ID":"15ef0c45-4c21-4824-850e-545f66a2c20a","Type":"ContainerStarted","Data":"e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.949794 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b64dh" event={"ID":"83490157-abed-443f-8843-945bb43715af","Type":"ContainerStarted","Data":"9f26212e4bdc5bda5416b6956048e081a79eb4fe056e9e364faed24f7ac4f14f"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.959117 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.966664 4902 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-bootstrap-dk26m" event={"ID":"e2be2f88-2ef5-4773-a31c-a8acd6e27608","Type":"ContainerStarted","Data":"276b271b02ab000b334b001c5253fa10542fc6c000e67438f4ac84d47645e83c"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.966705 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk26m" event={"ID":"e2be2f88-2ef5-4773-a31c-a8acd6e27608","Type":"ContainerStarted","Data":"7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.982706 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerStarted","Data":"7b40ba155df5f9ea3c66a7bdb479c5b6c0f2b6eda7d8e8f89404b65e212bd221"} Jan 21 14:52:54 crc kubenswrapper[4902]: W0121 14:52:54.987561 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cbc3227_2b2b_489c_bc35_2266eae99935.slice/crio-f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4 WatchSource:0}: Error finding container f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4: Status 404 returned error can't find the container with id f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4 Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.987868 4902 generic.go:334] "Generic (PLEG): container finished" podID="fdcec88e-b290-47a2-a111-f353528b337e" containerID="e09162a3ec37680929590914b38193023c428285227f1464b2740e369fca6b12" exitCode=0 Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.987912 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerDied","Data":"e09162a3ec37680929590914b38193023c428285227f1464b2740e369fca6b12"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.987931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerStarted","Data":"7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.991432 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zlh54" podStartSLOduration=2.991409842 podStartE2EDuration="2.991409842s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:54.982842289 +0000 UTC m=+1137.059675318" watchObservedRunningTime="2026-01-21 14:52:54.991409842 +0000 UTC m=+1137.068242871" Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.992711 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ds4z" event={"ID":"df9277be-e557-4d2e-b799-8fc6def975b9","Type":"ContainerStarted","Data":"1240a2082e984db724460dca85452b351506f660a1b70f26c765e2a219ef66f2"} Jan 21 14:52:54 crc kubenswrapper[4902]: I0121 14:52:54.997837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-twg7k" event={"ID":"137b1040-d368-4b6d-a4db-ba7c626f666f","Type":"ContainerStarted","Data":"83424afb06205a5855d6b3c92c92324b00e4ab6828b9f7a1bf1115dc87d2cda6"} Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.005226 4902 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/keystone-bootstrap-dk26m" podStartSLOduration=3.005206272 podStartE2EDuration="3.005206272s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:55.003274838 +0000 UTC m=+1137.080107867" watchObservedRunningTime="2026-01-21 14:52:55.005206272 +0000 UTC m=+1137.082039301" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.416624 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567777 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567799 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567846 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.567893 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") pod \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\" (UID: \"cb4097d7-2ce0-4a7c-b524-82d34c3d368c\") " Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.585371 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g" (OuterVolumeSpecName: "kube-api-access-hrt5g") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "kube-api-access-hrt5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.597919 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.600507 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.624822 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.641716 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config" (OuterVolumeSpecName: "config") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.652408 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb4097d7-2ce0-4a7c-b524-82d34c3d368c" (UID: "cb4097d7-2ce0-4a7c-b524-82d34c3d368c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671022 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrt5g\" (UniqueName: \"kubernetes.io/projected/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-kube-api-access-hrt5g\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671079 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671090 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671097 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671106 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:55 crc kubenswrapper[4902]: I0121 14:52:55.671115 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4097d7-2ce0-4a7c-b524-82d34c3d368c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.061948 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" event={"ID":"cb4097d7-2ce0-4a7c-b524-82d34c3d368c","Type":"ContainerDied","Data":"97e640bda0bdcefdf4097a70f64afdf78317cfb56209fc082cd41b1db92af0f8"} Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.062011 4902 scope.go:117] "RemoveContainer" containerID="3ff2eb1b3a6a4a80f1d99b80a1c9dfd1a67c9101a6c6bbf9236def171548312e" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.062193 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-glhzf" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.069649 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerStarted","Data":"f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4"} Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.077172 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerStarted","Data":"7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303"} Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.077231 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.103025 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" podStartSLOduration=4.103002152 podStartE2EDuration="4.103002152s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:56.101645773 +0000 UTC m=+1138.178478802" watchObservedRunningTime="2026-01-21 14:52:56.103002152 +0000 UTC m=+1138.179835181" Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.154870 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.157837 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-glhzf"] Jan 21 14:52:56 crc kubenswrapper[4902]: I0121 14:52:56.305862 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" path="/var/lib/kubelet/pods/cb4097d7-2ce0-4a7c-b524-82d34c3d368c/volumes" Jan 21 14:52:57 crc kubenswrapper[4902]: I0121 14:52:57.090541 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerStarted","Data":"4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d"} Jan 21 14:52:57 crc kubenswrapper[4902]: I0121 14:52:57.098414 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerStarted","Data":"0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5"} Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.110416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerStarted","Data":"8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a"} Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.110517 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-log" containerID="cri-o://4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d" gracePeriod=30 Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.110580 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-httpd" containerID="cri-o://8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a" gracePeriod=30 Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.118401 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerStarted","Data":"b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309"} Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.118594 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-log" containerID="cri-o://0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5" gracePeriod=30 Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.118675 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-httpd" containerID="cri-o://b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309" gracePeriod=30 Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.139826 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.139805627 podStartE2EDuration="6.139805627s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:58.134814676 +0000 UTC m=+1140.211647705" watchObservedRunningTime="2026-01-21 14:52:58.139805627 +0000 UTC m=+1140.216638656" Jan 21 14:52:58 crc kubenswrapper[4902]: I0121 14:52:58.164719 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.164670161 podStartE2EDuration="6.164670161s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:58.162118078 +0000 UTC m=+1140.238951127" watchObservedRunningTime="2026-01-21 14:52:58.164670161 +0000 UTC m=+1140.241503190" Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.152457 4902 generic.go:334] "Generic (PLEG): container finished" podID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerID="b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309" exitCode=143 Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.152501 4902 generic.go:334] "Generic (PLEG): container finished" podID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerID="0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5" exitCode=143 Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.152533 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerDied","Data":"b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309"} Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.152573 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerDied","Data":"0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5"} Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 
14:52:59.155553 4902 generic.go:334] "Generic (PLEG): container finished" podID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerID="8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a" exitCode=143 Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.155608 4902 generic.go:334] "Generic (PLEG): container finished" podID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerID="4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d" exitCode=143 Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.155651 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerDied","Data":"8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a"} Jan 21 14:52:59 crc kubenswrapper[4902]: I0121 14:52:59.155739 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerDied","Data":"4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d"} Jan 21 14:53:00 crc kubenswrapper[4902]: I0121 14:53:00.168063 4902 generic.go:334] "Generic (PLEG): container finished" podID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" containerID="276b271b02ab000b334b001c5253fa10542fc6c000e67438f4ac84d47645e83c" exitCode=0 Jan 21 14:53:00 crc kubenswrapper[4902]: I0121 14:53:00.168098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk26m" event={"ID":"e2be2f88-2ef5-4773-a31c-a8acd6e27608","Type":"ContainerDied","Data":"276b271b02ab000b334b001c5253fa10542fc6c000e67438f4ac84d47645e83c"} Jan 21 14:53:03 crc kubenswrapper[4902]: I0121 14:53:03.379023 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:53:03 crc kubenswrapper[4902]: I0121 14:53:03.447190 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:53:03 crc kubenswrapper[4902]: I0121 14:53:03.447485 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" containerID="cri-o://db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f" gracePeriod=10 Jan 21 14:53:04 crc kubenswrapper[4902]: I0121 14:53:04.161286 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Jan 21 14:53:04 crc kubenswrapper[4902]: I0121 14:53:04.225624 4902 generic.go:334] "Generic (PLEG): container finished" podID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerID="db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f" exitCode=0 Jan 21 14:53:04 crc kubenswrapper[4902]: I0121 14:53:04.225681 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerDied","Data":"db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f"} Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.243375 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dk26m" 
event={"ID":"e2be2f88-2ef5-4773-a31c-a8acd6e27608","Type":"ContainerDied","Data":"7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75"} Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.243993 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7544f3d00e92bd9decbbbaa4f539cd4161aa3325e13242445399b9b398495d75" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.245434 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7cbc3227-2b2b-489c-bc35-2266eae99935","Type":"ContainerDied","Data":"f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4"} Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.245460 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3dd33815c8a4853e7c5aeed751d1592a9c789599335b4e59224938b34e8c9a4" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.326395 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.326980 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.499965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502392 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502747 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502805 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502851 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502887 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.502976 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503093 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503135 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503219 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503233 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503494 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs" (OuterVolumeSpecName: "logs") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503677 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503762 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503852 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"7cbc3227-2b2b-489c-bc35-2266eae99935\" (UID: \"7cbc3227-2b2b-489c-bc35-2266eae99935\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.503925 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dblzl\" (UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") pod \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\" (UID: \"e2be2f88-2ef5-4773-a31c-a8acd6e27608\") " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.504589 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.504607 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cbc3227-2b2b-489c-bc35-2266eae99935-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.507736 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts" (OuterVolumeSpecName: "scripts") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.507949 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.507973 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8" (OuterVolumeSpecName: "kube-api-access-sb6s8") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "kube-api-access-sb6s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.508078 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts" (OuterVolumeSpecName: "scripts") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.509585 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.509743 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.517975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl" (OuterVolumeSpecName: "kube-api-access-dblzl") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "kube-api-access-dblzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.528676 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.531227 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.536157 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data" (OuterVolumeSpecName: "config-data") pod "e2be2f88-2ef5-4773-a31c-a8acd6e27608" (UID: "e2be2f88-2ef5-4773-a31c-a8acd6e27608"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.550115 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data" (OuterVolumeSpecName: "config-data") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.557608 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7cbc3227-2b2b-489c-bc35-2266eae99935" (UID: "7cbc3227-2b2b-489c-bc35-2266eae99935"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605880 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6s8\" (UniqueName: \"kubernetes.io/projected/7cbc3227-2b2b-489c-bc35-2266eae99935-kube-api-access-sb6s8\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605924 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605941 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605952 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.605991 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606005 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dblzl\" (UniqueName: \"kubernetes.io/projected/e2be2f88-2ef5-4773-a31c-a8acd6e27608-kube-api-access-dblzl\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606018 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606032 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606062 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606073 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cbc3227-2b2b-489c-bc35-2266eae99935-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606083 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.606095 4902 reconciler_common.go:293] "Volume detached for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2be2f88-2ef5-4773-a31c-a8acd6e27608-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.628422 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 14:53:06 crc kubenswrapper[4902]: I0121 14:53:06.707506 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.254117 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dk26m" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.254192 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.297685 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.316224 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.331178 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:07 crc kubenswrapper[4902]: E0121 14:53:07.331850 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-log" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.331963 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-log" Jan 21 14:53:07 crc kubenswrapper[4902]: E0121 14:53:07.332064 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-httpd" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332154 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-httpd" Jan 21 14:53:07 crc kubenswrapper[4902]: E0121 14:53:07.332235 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" containerName="keystone-bootstrap" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332302 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" containerName="keystone-bootstrap" Jan 21 14:53:07 crc kubenswrapper[4902]: E0121 14:53:07.332398 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" containerName="init" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332469 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" containerName="init" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332731 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-log" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332822 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" containerName="keystone-bootstrap" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332901 4902 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="cb4097d7-2ce0-4a7c-b524-82d34c3d368c" containerName="init" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.332980 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" containerName="glance-httpd" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.334149 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.336688 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.336863 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.340855 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.524673 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525086 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525136 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525218 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525262 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525292 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.525321 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.546401 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.561002 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dk26m"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626597 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626651 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626691 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626735 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626758 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626780 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626817 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.626857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.631685 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-c6zzp"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.632015 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.632155 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.632349 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.633280 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.635528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.635640 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.635880 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.640583 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c6zzp"] Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.641370 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.641907 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.641967 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.642118 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.642173 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5z5b6" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.645245 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.656929 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.677535 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830068 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830226 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830289 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830313 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.830402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.931920 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932031 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932088 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932152 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932186 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.932216 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.938768 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.940950 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.951275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.951882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.958457 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.959124 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:07 crc kubenswrapper[4902]: I0121 14:53:07.970741 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") pod \"keystone-bootstrap-c6zzp\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:08 crc kubenswrapper[4902]: I0121 14:53:08.033931 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:08 crc kubenswrapper[4902]: I0121 14:53:08.316662 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cbc3227-2b2b-489c-bc35-2266eae99935" path="/var/lib/kubelet/pods/7cbc3227-2b2b-489c-bc35-2266eae99935/volumes" Jan 21 14:53:08 crc kubenswrapper[4902]: I0121 14:53:08.317679 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2be2f88-2ef5-4773-a31c-a8acd6e27608" path="/var/lib/kubelet/pods/e2be2f88-2ef5-4773-a31c-a8acd6e27608/volumes" Jan 21 14:53:14 crc kubenswrapper[4902]: I0121 14:53:14.097321 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.041981 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.042482 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tjb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4ds4z_openstack(df9277be-e557-4d2e-b799-8fc6def975b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.043996 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4ds4z" 
podUID="df9277be-e557-4d2e-b799-8fc6def975b9" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.149913 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.156855 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.161862 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.161972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162088 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162120 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162157 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162240 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162284 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.162325 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\" (UID: \"55daf4a6-0e2d-4832-8740-87f628a6e2cc\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.163298 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.163338 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs" (OuterVolumeSpecName: "logs") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.171418 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g" (OuterVolumeSpecName: "kube-api-access-v429g") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "kube-api-access-v429g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.186521 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.186564 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts" (OuterVolumeSpecName: "scripts") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.206607 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.218888 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.251005 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data" (OuterVolumeSpecName: "config-data") pod "55daf4a6-0e2d-4832-8740-87f628a6e2cc" (UID: "55daf4a6-0e2d-4832-8740-87f628a6e2cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.263862 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.263913 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.263963 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264032 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264109 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264524 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") pod \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\" (UID: \"9cfdec8c-8d41-4ae4-ad01-a4b76f589140\") " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264894 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v429g\" (UniqueName: \"kubernetes.io/projected/55daf4a6-0e2d-4832-8740-87f628a6e2cc-kube-api-access-v429g\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264907 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264915 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55daf4a6-0e2d-4832-8740-87f628a6e2cc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264932 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264941 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264949 4902 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264956 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.264968 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55daf4a6-0e2d-4832-8740-87f628a6e2cc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.269664 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp" (OuterVolumeSpecName: "kube-api-access-xgdkp") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "kube-api-access-xgdkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.284730 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.312395 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config" (OuterVolumeSpecName: "config") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.312755 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.313595 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.316667 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.327231 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9cfdec8c-8d41-4ae4-ad01-a4b76f589140" (UID: "9cfdec8c-8d41-4ae4-ad01-a4b76f589140"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366589 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366629 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366640 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgdkp\" (UniqueName: \"kubernetes.io/projected/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-kube-api-access-xgdkp\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366651 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366659 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366670 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.366679 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cfdec8c-8d41-4ae4-ad01-a4b76f589140-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.430768 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55daf4a6-0e2d-4832-8740-87f628a6e2cc","Type":"ContainerDied","Data":"7b40ba155df5f9ea3c66a7bdb479c5b6c0f2b6eda7d8e8f89404b65e212bd221"} Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.430864 4902 scope.go:117] "RemoveContainer" containerID="b8ee32986cfb942a0b8084ffe5d7119b0f8740770e39cc90369ef4116159b309" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.431018 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.438199 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" event={"ID":"9cfdec8c-8d41-4ae4-ad01-a4b76f589140","Type":"ContainerDied","Data":"f37440f856bd371cff80f2f0f1e426de41d7fcfe1af9a8b2a61bd34561bbe363"} Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.438220 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.439494 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-4ds4z" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.482459 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.490406 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.506532 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.517569 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-ns7jh"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.538859 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.539316 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-log" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539339 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-log" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.539363 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539373 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.539405 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="init" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539415 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="init" Jan 21 14:53:16 crc kubenswrapper[4902]: E0121 14:53:16.539431 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-httpd" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539438 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-httpd" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539651 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-httpd" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539682 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" containerName="glance-log" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.539694 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.540727 4902 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.542814 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.543918 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.551782 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569457 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569536 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569608 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569627 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569644 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569696 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.569724 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670699 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670768 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670803 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670867 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670902 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670944 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670970 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.670993 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.671336 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.671945 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.672072 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.675513 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.676438 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.676669 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.685373 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.689467 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.695926 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:16 crc kubenswrapper[4902]: I0121 14:53:16.867301 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:17 crc kubenswrapper[4902]: I0121 14:53:17.511397 4902 scope.go:117] "RemoveContainer" containerID="0217c431d44df2f20d49bb80b38d3634c60d8d298432655fbcd6784934a969c5" Jan 21 14:53:17 crc kubenswrapper[4902]: E0121 14:53:17.522178 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 21 14:53:17 crc kubenswrapper[4902]: E0121 14:53:17.522391 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4xph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-twg7k_openstack(137b1040-d368-4b6d-a4db-ba7c626f666f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:53:17 crc kubenswrapper[4902]: E0121 14:53:17.523684 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc 
= copying config: context canceled\"" pod="openstack/cinder-db-sync-twg7k" podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" Jan 21 14:53:17 crc kubenswrapper[4902]: I0121 14:53:17.673150 4902 scope.go:117] "RemoveContainer" containerID="db22298ae310fe9c4abed7194da190cb997c649d5ca02aff4d05d6c947c77a3f" Jan 21 14:53:17 crc kubenswrapper[4902]: I0121 14:53:17.725704 4902 scope.go:117] "RemoveContainer" containerID="085cc064e188fc067c109385def65abea0a69b47e1fe8f6dadc55d4ea12c4007" Jan 21 14:53:17 crc kubenswrapper[4902]: I0121 14:53:17.978998 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c6zzp"] Jan 21 14:53:17 crc kubenswrapper[4902]: W0121 14:53:17.980701 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod966f492d_0f8f_4bef_b60f_777f25367104.slice/crio-3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a WatchSource:0}: Error finding container 3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a: Status 404 returned error can't find the container with id 3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.160563 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.260119 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:18 crc kubenswrapper[4902]: W0121 14:53:18.274549 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30ff158a_452e_4180_b99e_9a171035d794.slice/crio-fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce WatchSource:0}: Error finding container fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce: Status 404 returned error can't find the container with id fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.325885 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55daf4a6-0e2d-4832-8740-87f628a6e2cc" path="/var/lib/kubelet/pods/55daf4a6-0e2d-4832-8740-87f628a6e2cc/volumes" Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.326778 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" path="/var/lib/kubelet/pods/9cfdec8c-8d41-4ae4-ad01-a4b76f589140/volumes" Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.456068 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6zzp" event={"ID":"966f492d-0f8f-4bef-b60f-777f25367104","Type":"ContainerStarted","Data":"b1d80a37b9ccbfaf4f2535ef16320e6b2227313b028f8ea36eaf1aa897c3fa62"} Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.456655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6zzp" event={"ID":"966f492d-0f8f-4bef-b60f-777f25367104","Type":"ContainerStarted","Data":"3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a"} Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.460010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerStarted","Data":"8a331ab6e6d4779bd2c4ccae990c6e7b561e92f584c35ef4a58e44ff1375f620"} Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.465342 
4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041"} Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.468932 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b64dh" event={"ID":"83490157-abed-443f-8843-945bb43715af","Type":"ContainerStarted","Data":"885834645f14556231a3e7a784298540883e5e957ef165eb89b1d865e26a97ac"} Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.474829 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerStarted","Data":"fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce"} Jan 21 14:53:18 crc kubenswrapper[4902]: E0121 14:53:18.478692 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-twg7k" podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.483757 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-c6zzp" podStartSLOduration=11.483733018 podStartE2EDuration="11.483733018s" podCreationTimestamp="2026-01-21 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:18.470891304 +0000 UTC m=+1160.547724333" watchObservedRunningTime="2026-01-21 14:53:18.483733018 +0000 UTC m=+1160.560566047" Jan 21 14:53:18 crc kubenswrapper[4902]: I0121 14:53:18.501003 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-b64dh" podStartSLOduration=3.23968357 podStartE2EDuration="26.500983506s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="2026-01-21 14:52:54.198878863 +0000 UTC m=+1136.275711902" lastFinishedPulling="2026-01-21 14:53:17.460178799 +0000 UTC m=+1159.537011838" observedRunningTime="2026-01-21 14:53:18.492731612 +0000 UTC m=+1160.569564641" watchObservedRunningTime="2026-01-21 14:53:18.500983506 +0000 UTC m=+1160.577816535" Jan 21 14:53:19 crc kubenswrapper[4902]: I0121 14:53:19.098002 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-ns7jh" podUID="9cfdec8c-8d41-4ae4-ad01-a4b76f589140" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Jan 21 14:53:19 crc kubenswrapper[4902]: I0121 14:53:19.483178 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerStarted","Data":"5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456"} Jan 21 14:53:19 crc kubenswrapper[4902]: I0121 14:53:19.485860 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerStarted","Data":"42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72"} Jan 21 14:53:20 crc kubenswrapper[4902]: I0121 14:53:20.494149 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerStarted","Data":"14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df"} Jan 21 14:53:28 crc kubenswrapper[4902]: I0121 14:53:28.569934 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerStarted","Data":"71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19"} Jan 21 14:53:28 crc kubenswrapper[4902]: I0121 14:53:28.595929 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.595910255 podStartE2EDuration="21.595910255s" podCreationTimestamp="2026-01-21 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:28.58829618 +0000 UTC m=+1170.665129209" watchObservedRunningTime="2026-01-21 14:53:28.595910255 +0000 UTC m=+1170.672743284" Jan 21 14:53:28 crc kubenswrapper[4902]: I0121 14:53:28.611184 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.611164957 podStartE2EDuration="12.611164957s" podCreationTimestamp="2026-01-21 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:28.608861302 +0000 UTC m=+1170.685694331" watchObservedRunningTime="2026-01-21 14:53:28.611164957 +0000 UTC m=+1170.687997986" Jan 21 14:53:31 crc kubenswrapper[4902]: I0121 14:53:31.599824 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ds4z" event={"ID":"df9277be-e557-4d2e-b799-8fc6def975b9","Type":"ContainerStarted","Data":"ae254c62b0513ec3d622f49b853707c7b475818d264ba6a9ceb8efcfd14f5993"} Jan 21 14:53:31 crc kubenswrapper[4902]: I0121 14:53:31.601999 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d"} Jan 21 14:53:32 crc kubenswrapper[4902]: I0121 14:53:32.614309 4902 generic.go:334] "Generic (PLEG): container finished" podID="966f492d-0f8f-4bef-b60f-777f25367104" containerID="b1d80a37b9ccbfaf4f2535ef16320e6b2227313b028f8ea36eaf1aa897c3fa62" exitCode=0 Jan 21 14:53:32 crc kubenswrapper[4902]: I0121 14:53:32.614346 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6zzp" event={"ID":"966f492d-0f8f-4bef-b60f-777f25367104","Type":"ContainerDied","Data":"b1d80a37b9ccbfaf4f2535ef16320e6b2227313b028f8ea36eaf1aa897c3fa62"} Jan 21 14:53:32 crc kubenswrapper[4902]: I0121 14:53:32.654612 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4ds4z" podStartSLOduration=3.521797194 podStartE2EDuration="40.654594741s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="2026-01-21 14:52:54.211971664 +0000 UTC m=+1136.288804693" lastFinishedPulling="2026-01-21 14:53:31.344769221 +0000 UTC m=+1173.421602240" observedRunningTime="2026-01-21 14:53:32.65065108 +0000 UTC m=+1174.727484109" watchObservedRunningTime="2026-01-21 14:53:32.654594741 +0000 UTC m=+1174.731427770" Jan 21 14:53:33 crc kubenswrapper[4902]: I0121 14:53:33.622605 4902 generic.go:334] "Generic (PLEG): 
container finished" podID="83490157-abed-443f-8843-945bb43715af" containerID="885834645f14556231a3e7a784298540883e5e957ef165eb89b1d865e26a97ac" exitCode=0 Jan 21 14:53:33 crc kubenswrapper[4902]: I0121 14:53:33.622689 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b64dh" event={"ID":"83490157-abed-443f-8843-945bb43715af","Type":"ContainerDied","Data":"885834645f14556231a3e7a784298540883e5e957ef165eb89b1d865e26a97ac"} Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.341432 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.425900 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.425998 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.426121 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.426170 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.426201 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.426232 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") pod \"966f492d-0f8f-4bef-b60f-777f25367104\" (UID: \"966f492d-0f8f-4bef-b60f-777f25367104\") " Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.431969 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.432172 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). 
InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.433368 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6" (OuterVolumeSpecName: "kube-api-access-6smc6") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "kube-api-access-6smc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.433495 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts" (OuterVolumeSpecName: "scripts") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.451193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.466080 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data" (OuterVolumeSpecName: "config-data") pod "966f492d-0f8f-4bef-b60f-777f25367104" (UID: "966f492d-0f8f-4bef-b60f-777f25367104"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528249 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528288 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528301 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528311 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6smc6\" (UniqueName: \"kubernetes.io/projected/966f492d-0f8f-4bef-b60f-777f25367104-kube-api-access-6smc6\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528321 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.528329 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/966f492d-0f8f-4bef-b60f-777f25367104-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.642914 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c6zzp" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.642980 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c6zzp" event={"ID":"966f492d-0f8f-4bef-b60f-777f25367104","Type":"ContainerDied","Data":"3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a"} Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.643008 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3313ecc8a66d98cc624d87ed14dbd071277551fa0b6dc3f15d60fe60589dd37a" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.733323 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5684459db4-jgdkj"] Jan 21 14:53:34 crc kubenswrapper[4902]: E0121 14:53:34.733790 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966f492d-0f8f-4bef-b60f-777f25367104" containerName="keystone-bootstrap" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.733812 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="966f492d-0f8f-4bef-b60f-777f25367104" containerName="keystone-bootstrap" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.737225 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="966f492d-0f8f-4bef-b60f-777f25367104" containerName="keystone-bootstrap" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.737973 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.744142 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.744814 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5z5b6" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.745082 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.745294 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.745434 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.746004 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.747464 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"] Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.842801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 
14:53:34.843122 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843222 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843276 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843418 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843484 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.843511 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951066 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951156 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc 
kubenswrapper[4902]: I0121 14:53:34.951225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951306 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951369 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.951400 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.955790 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.955952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.955950 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.957733 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.958944 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.964313 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.965029 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:34 crc kubenswrapper[4902]: I0121 14:53:34.987618 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") pod \"keystone-5684459db4-jgdkj\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:35 crc kubenswrapper[4902]: I0121 14:53:35.075935 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.675318 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b64dh" event={"ID":"83490157-abed-443f-8843-945bb43715af","Type":"ContainerDied","Data":"9f26212e4bdc5bda5416b6956048e081a79eb4fe056e9e364faed24f7ac4f14f"} Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.675750 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f26212e4bdc5bda5416b6956048e081a79eb4fe056e9e364faed24f7ac4f14f" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.763584 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-b64dh" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.867974 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.868067 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.908150 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911315 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911444 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911497 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911526 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.911563 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") pod \"83490157-abed-443f-8843-945bb43715af\" (UID: \"83490157-abed-443f-8843-945bb43715af\") " Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.912431 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs" (OuterVolumeSpecName: "logs") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.917522 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts" (OuterVolumeSpecName: "scripts") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.929643 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.940149 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt" (OuterVolumeSpecName: "kube-api-access-xlgnt") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "kube-api-access-xlgnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.945257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:36 crc kubenswrapper[4902]: I0121 14:53:36.952212 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data" (OuterVolumeSpecName: "config-data") pod "83490157-abed-443f-8843-945bb43715af" (UID: "83490157-abed-443f-8843-945bb43715af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013701 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013744 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83490157-abed-443f-8843-945bb43715af-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013756 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlgnt\" (UniqueName: \"kubernetes.io/projected/83490157-abed-443f-8843-945bb43715af-kube-api-access-xlgnt\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013767 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.013778 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83490157-abed-443f-8843-945bb43715af-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.028760 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"] Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.683925 4902 generic.go:334] "Generic (PLEG): container finished" podID="df9277be-e557-4d2e-b799-8fc6def975b9" containerID="ae254c62b0513ec3d622f49b853707c7b475818d264ba6a9ceb8efcfd14f5993" exitCode=0 Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.684012 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ds4z" 
event={"ID":"df9277be-e557-4d2e-b799-8fc6def975b9","Type":"ContainerDied","Data":"ae254c62b0513ec3d622f49b853707c7b475818d264ba6a9ceb8efcfd14f5993"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.686122 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-twg7k" event={"ID":"137b1040-d368-4b6d-a4db-ba7c626f666f","Type":"ContainerStarted","Data":"b544fa374d13ef6e784a8d5d16f0cdb36de690b191b7cd286db841a786a83df0"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.693174 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695007 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b64dh" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695350 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5684459db4-jgdkj" event={"ID":"8e00c7d5-7199-4602-9d3b-5af4f14124bc","Type":"ContainerStarted","Data":"ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695395 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5684459db4-jgdkj" event={"ID":"8e00c7d5-7199-4602-9d3b-5af4f14124bc","Type":"ContainerStarted","Data":"a7b81b6927c5878e4864d8eea63ac6db97be31623e53b2291bbb5d03097d4cf8"} Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695777 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695922 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.695984 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.726212 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5684459db4-jgdkj" podStartSLOduration=3.726187633 podStartE2EDuration="3.726187633s" podCreationTimestamp="2026-01-21 14:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:37.720453631 +0000 UTC m=+1179.797286670" watchObservedRunningTime="2026-01-21 14:53:37.726187633 +0000 UTC m=+1179.803020662" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.748981 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-twg7k" podStartSLOduration=3.33336474 podStartE2EDuration="45.748965208s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="2026-01-21 14:52:54.182085558 +0000 UTC m=+1136.258918587" lastFinishedPulling="2026-01-21 14:53:36.597686026 +0000 UTC m=+1178.674519055" observedRunningTime="2026-01-21 14:53:37.740402756 +0000 UTC m=+1179.817235785" watchObservedRunningTime="2026-01-21 14:53:37.748965208 +0000 UTC m=+1179.825798237" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.918091 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:53:37 crc kubenswrapper[4902]: E0121 14:53:37.918509 4902 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="83490157-abed-443f-8843-945bb43715af" containerName="placement-db-sync" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.918530 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="83490157-abed-443f-8843-945bb43715af" containerName="placement-db-sync" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.918739 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="83490157-abed-443f-8843-945bb43715af" containerName="placement-db-sync" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.919796 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.923630 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.924774 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.925172 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-26vvq" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.925406 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.925563 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.951822 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.959288 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.959325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.959337 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:37 crc kubenswrapper[4902]: I0121 14:53:37.959438 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.016480 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.019326 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033063 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " 
pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033217 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033322 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033359 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033379 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.033407 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.135515 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.135558 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.135578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136113 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 
14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136523 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136786 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.136849 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.142702 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.142758 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.145823 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.153949 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.156607 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.156823 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") pod \"placement-7ddf9d8f68-jjk7f\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.236472 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.711689 4902 generic.go:334] "Generic (PLEG): container finished" podID="15ef0c45-4c21-4824-850e-545f66a2c20a" containerID="c05aa038a30ca68cb9b9875b1713755a7a748b30cae2fd412e457a921170733c" exitCode=0 Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.711775 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zlh54" event={"ID":"15ef0c45-4c21-4824-850e-545f66a2c20a","Type":"ContainerDied","Data":"c05aa038a30ca68cb9b9875b1713755a7a748b30cae2fd412e457a921170733c"} Jan 21 14:53:38 crc kubenswrapper[4902]: I0121 14:53:38.791440 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.104000 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.275852 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") pod \"df9277be-e557-4d2e-b799-8fc6def975b9\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.276165 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") pod \"df9277be-e557-4d2e-b799-8fc6def975b9\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.276306 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") pod \"df9277be-e557-4d2e-b799-8fc6def975b9\" (UID: \"df9277be-e557-4d2e-b799-8fc6def975b9\") " Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.280088 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "df9277be-e557-4d2e-b799-8fc6def975b9" (UID: "df9277be-e557-4d2e-b799-8fc6def975b9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.280294 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8" (OuterVolumeSpecName: "kube-api-access-6tjb8") pod "df9277be-e557-4d2e-b799-8fc6def975b9" (UID: "df9277be-e557-4d2e-b799-8fc6def975b9"). InnerVolumeSpecName "kube-api-access-6tjb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.303803 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df9277be-e557-4d2e-b799-8fc6def975b9" (UID: "df9277be-e557-4d2e-b799-8fc6def975b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.378006 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.378055 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df9277be-e557-4d2e-b799-8fc6def975b9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.378067 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tjb8\" (UniqueName: \"kubernetes.io/projected/df9277be-e557-4d2e-b799-8fc6def975b9-kube-api-access-6tjb8\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.772116 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerStarted","Data":"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd"} Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.773126 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerStarted","Data":"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924"} Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.773256 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerStarted","Data":"31d5a67184f80e0f8e30cfab691135f2f1fd9f01d89fed99d676f711a03521eb"} Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.781200 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.781275 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.844864 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4ds4z" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.847138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ds4z" event={"ID":"df9277be-e557-4d2e-b799-8fc6def975b9","Type":"ContainerDied","Data":"1240a2082e984db724460dca85452b351506f660a1b70f26c765e2a219ef66f2"} Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.847188 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1240a2082e984db724460dca85452b351506f660a1b70f26c765e2a219ef66f2" Jan 21 14:53:39 crc kubenswrapper[4902]: I0121 14:53:39.951549 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7ddf9d8f68-jjk7f" podStartSLOduration=2.9515312639999998 podStartE2EDuration="2.951531264s" podCreationTimestamp="2026-01-21 14:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:39.945540784 +0000 UTC m=+1182.022373813" watchObservedRunningTime="2026-01-21 14:53:39.951531264 +0000 UTC m=+1182.028364293" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.009576 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:53:40 crc kubenswrapper[4902]: E0121 14:53:40.010124 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" containerName="barbican-db-sync" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.010210 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" containerName="barbican-db-sync" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.010487 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" containerName="barbican-db-sync" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.011665 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.016455 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.016830 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lxg2q" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.017372 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.024603 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.100102 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"] Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.101846 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.105949 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.117226 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"] Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148121 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148347 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148420 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148551 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.148621 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.167757 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.169302 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.191637 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.251944 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.251982 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252013 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252036 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252074 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252101 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252119 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252158 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" 
Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252216 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252284 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252350 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.252383 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.254421 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: 
\"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.264994 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.266952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.268262 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.273073 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.275322 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.276422 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.282688 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") pod \"barbican-worker-68564cb5c-bh98h\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.282910 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444117 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444157 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444179 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 
14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444258 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444274 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444302 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444323 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444351 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444405 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444471 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: 
\"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444492 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444534 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444554 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.444579 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.457995 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.463969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.465511 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.466173 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.467957 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.471336 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.474803 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.475114 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.477035 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.477733 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " 
pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.494690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") pod \"dnsmasq-dns-8fffc8985-dfqgq\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.503202 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") pod \"barbican-keystone-listener-b755cd77b-nd6p7\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.504080 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546548 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546641 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546800 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.546903 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.547458 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.550270 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.563843 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.563859 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.580097 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") pod \"barbican-api-6445cbf9c4-z4mzt\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.608777 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.638287 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zlh54" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.648106 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") pod \"15ef0c45-4c21-4824-850e-545f66a2c20a\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.648319 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") pod \"15ef0c45-4c21-4824-850e-545f66a2c20a\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.648409 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") pod \"15ef0c45-4c21-4824-850e-545f66a2c20a\" (UID: \"15ef0c45-4c21-4824-850e-545f66a2c20a\") " Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.659071 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2" (OuterVolumeSpecName: "kube-api-access-bz5m2") pod "15ef0c45-4c21-4824-850e-545f66a2c20a" (UID: "15ef0c45-4c21-4824-850e-545f66a2c20a"). InnerVolumeSpecName "kube-api-access-bz5m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.687526 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15ef0c45-4c21-4824-850e-545f66a2c20a" (UID: "15ef0c45-4c21-4824-850e-545f66a2c20a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.708381 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.715690 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.733568 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.733575 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config" (OuterVolumeSpecName: "config") pod "15ef0c45-4c21-4824-850e-545f66a2c20a" (UID: "15ef0c45-4c21-4824-850e-545f66a2c20a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.751786 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz5m2\" (UniqueName: \"kubernetes.io/projected/15ef0c45-4c21-4824-850e-545f66a2c20a-kube-api-access-bz5m2\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.752384 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.752402 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/15ef0c45-4c21-4824-850e-545f66a2c20a-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.909801 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zlh54" Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.910947 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zlh54" event={"ID":"15ef0c45-4c21-4824-850e-545f66a2c20a","Type":"ContainerDied","Data":"e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f"} Jan 21 14:53:40 crc kubenswrapper[4902]: I0121 14:53:40.910986 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e397414d8ec5ae40c73abfed568886ab434c67c248ca086a30736dc5c091823f" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.018083 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.048564 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:53:41 crc kubenswrapper[4902]: E0121 14:53:41.051429 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ef0c45-4c21-4824-850e-545f66a2c20a" containerName="neutron-db-sync" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.051467 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ef0c45-4c21-4824-850e-545f66a2c20a" containerName="neutron-db-sync" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.051733 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ef0c45-4c21-4824-850e-545f66a2c20a" containerName="neutron-db-sync" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.053491 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.119703 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.193094 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.197119 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199734 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199772 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199838 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.199982 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.212383 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.215580 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.215676 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.215975 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.216255 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d52d6" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.235106 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:53:41 crc 
kubenswrapper[4902]: I0121 14:53:41.254465 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.301895 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302159 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302192 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302242 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302272 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302409 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302448 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302524 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302575 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") pod 
\"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.302655 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.303198 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.304029 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.305992 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.306153 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.306569 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.330301 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.337969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") pod \"dnsmasq-dns-66cdd4b5b5-g578w\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405314 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405403 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.405602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.411503 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.419698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.427743 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.429825 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.440878 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.468634 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") pod \"neutron-569676bc6b-gw28h\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.551726 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.555814 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.676580 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"] Jan 21 14:53:41 crc kubenswrapper[4902]: I0121 14:53:41.994416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerStarted","Data":"0adea585b27eb9363f63f38b86e1f0b5aee1a5b47c7b1b2342897a2515892311"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.003263 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerStarted","Data":"d7ec9f34e635f9308b93f9d0dc6cda96b623b10532da8d7eb05383f6117459ce"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.018379 4902 generic.go:334] "Generic (PLEG): container finished" podID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" containerID="d79940242a6d6496842cce005349ce2182772f08d24d0ebbf491cb39873ab862" exitCode=0 Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.018461 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" event={"ID":"6ffe1f41-e154-4eb4-a871-60bdfaee1507","Type":"ContainerDied","Data":"d79940242a6d6496842cce005349ce2182772f08d24d0ebbf491cb39873ab862"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.018508 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" event={"ID":"6ffe1f41-e154-4eb4-a871-60bdfaee1507","Type":"ContainerStarted","Data":"874a2c6cdd912e6f6172031fa775f0dcec49b3d3d1e70dc96e7ff9e1e1fbe364"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.025281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerStarted","Data":"b50cfe39dc085f1d64312e802e7695d3392dba0e54502997f28533561b89173b"} Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.242859 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:53:42 crc kubenswrapper[4902]: W0121 14:53:42.262513 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95d16264_79d8_49aa_92aa_9f95f6f88ee5.slice/crio-3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9 WatchSource:0}: Error finding container 3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9: Status 404 returned error can't find the container with id 3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9 Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.506502 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.558466 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.567711 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.567798 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.567927 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.567966 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.568780 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.568812 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") pod \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\" (UID: \"6ffe1f41-e154-4eb4-a871-60bdfaee1507\") " Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.582700 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc" (OuterVolumeSpecName: "kube-api-access-rmpfc") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "kube-api-access-rmpfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.593458 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.594225 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config" (OuterVolumeSpecName: "config") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.597070 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: W0121 14:53:42.610941 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e9efdb_8b95_4082_8a1d_8b5a987b2516.slice/crio-d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2 WatchSource:0}: Error finding container d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2: Status 404 returned error can't find the container with id d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2 Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.650900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.674690 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675139 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmpfc\" (UniqueName: \"kubernetes.io/projected/6ffe1f41-e154-4eb4-a871-60bdfaee1507-kube-api-access-rmpfc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675239 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675331 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675386 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675493 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.675677 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.677164 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ffe1f41-e154-4eb4-a871-60bdfaee1507" (UID: "6ffe1f41-e154-4eb4-a871-60bdfaee1507"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.704315 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:42 crc kubenswrapper[4902]: I0121 14:53:42.777196 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ffe1f41-e154-4eb4-a871-60bdfaee1507-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.053274 4902 generic.go:334] "Generic (PLEG): container finished" podID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerID="1bf0f0d44db0898ad571fc3dc44b938eda2883d14d10f909c94ababfd0ac149c" exitCode=0 Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.053336 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerDied","Data":"1bf0f0d44db0898ad571fc3dc44b938eda2883d14d10f909c94ababfd0ac149c"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.053652 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerStarted","Data":"3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.054827 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerStarted","Data":"d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.068499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" event={"ID":"6ffe1f41-e154-4eb4-a871-60bdfaee1507","Type":"ContainerDied","Data":"874a2c6cdd912e6f6172031fa775f0dcec49b3d3d1e70dc96e7ff9e1e1fbe364"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.068549 4902 scope.go:117] "RemoveContainer" containerID="d79940242a6d6496842cce005349ce2182772f08d24d0ebbf491cb39873ab862" Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.068676 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-dfqgq" Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.072960 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerStarted","Data":"23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c"} Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.298613 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:43 crc kubenswrapper[4902]: I0121 14:53:43.315029 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-dfqgq"] Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.081627 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerStarted","Data":"44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3"} Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.085346 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerStarted","Data":"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30"} Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.092109 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerStarted","Data":"f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692"} Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.092287 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.092303 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.128231 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podStartSLOduration=4.12821037 podStartE2EDuration="4.12821037s" podCreationTimestamp="2026-01-21 14:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:44.116701754 +0000 UTC m=+1186.193534783" watchObservedRunningTime="2026-01-21 14:53:44.12821037 +0000 UTC m=+1186.205043399" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.306562 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" path="/var/lib/kubelet/pods/6ffe1f41-e154-4eb4-a871-60bdfaee1507/volumes" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.686726 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:53:44 crc kubenswrapper[4902]: E0121 14:53:44.687113 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" containerName="init" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.687126 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" containerName="init" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.687337 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ffe1f41-e154-4eb4-a871-60bdfaee1507" containerName="init" Jan 
21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.691147 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.692804 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.693238 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.699561 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.815956 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.816418 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.816444 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.816463 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.816780 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.817007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.817127 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hptjz\" (UniqueName: \"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919197 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919274 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919318 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919396 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919482 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.919515 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hptjz\" (UniqueName: \"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.925112 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.925206 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.927977 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.928334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.935286 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.936407 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:44 crc kubenswrapper[4902]: I0121 14:53:44.945437 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hptjz\" (UniqueName: \"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") pod \"neutron-7887695489-rtxbl\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:45 crc kubenswrapper[4902]: I0121 14:53:45.008091 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:45 crc kubenswrapper[4902]: W0121 14:53:45.783942 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc3ac42_826c_4f25_a3f7_d1ab2eb8cbf5.slice/crio-b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9 WatchSource:0}: Error finding container b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9: Status 404 returned error can't find the container with id b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9 Jan 21 14:53:45 crc kubenswrapper[4902]: I0121 14:53:45.784811 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.115969 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerStarted","Data":"b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9"} Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.119305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerStarted","Data":"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69"} Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.119344 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.119366 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.136685 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" podStartSLOduration=5.136661921 podStartE2EDuration="5.136661921s" podCreationTimestamp="2026-01-21 14:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:46.133951094 +0000 UTC m=+1188.210784123" watchObservedRunningTime="2026-01-21 14:53:46.136661921 +0000 UTC m=+1188.213494950" Jan 21 14:53:46 crc kubenswrapper[4902]: I0121 14:53:46.167136 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-569676bc6b-gw28h" podStartSLOduration=5.167110913 podStartE2EDuration="5.167110913s" podCreationTimestamp="2026-01-21 14:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:46.159765705 +0000 UTC m=+1188.236598734" watchObservedRunningTime="2026-01-21 14:53:46.167110913 +0000 UTC m=+1188.243943962" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.127262 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerStarted","Data":"51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c"} Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.673971 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.676131 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.678326 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.679125 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.698268 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772201 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772266 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772320 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772485 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkf7\" (UniqueName: \"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772563 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.772709 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875215 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875288 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875324 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875389 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvkf7\" (UniqueName: \"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875427 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875463 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.875713 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.880367 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.880748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.884369 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.889717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.892326 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvkf7\" (UniqueName: \"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:47 crc kubenswrapper[4902]: I0121 14:53:47.895817 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") pod \"barbican-api-5df595696d-2ftxp\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:48 crc kubenswrapper[4902]: I0121 14:53:47.999074 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:51 crc kubenswrapper[4902]: I0121 14:53:51.415169 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:53:51 crc kubenswrapper[4902]: I0121 14:53:51.497234 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:53:51 crc kubenswrapper[4902]: I0121 14:53:51.497483 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" containerID="cri-o://7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303" gracePeriod=10 Jan 21 14:53:52 crc kubenswrapper[4902]: I0121 14:53:52.369573 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:53:52 crc kubenswrapper[4902]: I0121 14:53:52.372831 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:52 crc kubenswrapper[4902]: I0121 14:53:52.380272 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:53:53 crc kubenswrapper[4902]: I0121 14:53:53.200799 4902 generic.go:334] "Generic (PLEG): container finished" podID="fdcec88e-b290-47a2-a111-f353528b337e" containerID="7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303" exitCode=0 Jan 21 14:53:53 crc kubenswrapper[4902]: I0121 14:53:53.200849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerDied","Data":"7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303"} Jan 21 14:53:53 crc kubenswrapper[4902]: I0121 14:53:53.899165 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.213079 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" event={"ID":"fdcec88e-b290-47a2-a111-f353528b337e","Type":"ContainerDied","Data":"7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153"} Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.213131 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebaaeb2e75c14c9e558716e3947916d738a9e8482d276a5a56a2360272de153" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.259467 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434264 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434344 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434383 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434438 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434544 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.434585 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") pod \"fdcec88e-b290-47a2-a111-f353528b337e\" (UID: \"fdcec88e-b290-47a2-a111-f353528b337e\") " Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.440847 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz" (OuterVolumeSpecName: "kube-api-access-d6hzz") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "kube-api-access-d6hzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.480828 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.483947 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config" (OuterVolumeSpecName: "config") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.488643 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.512419 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.512573 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fdcec88e-b290-47a2-a111-f353528b337e" (UID: "fdcec88e-b290-47a2-a111-f353528b337e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537391 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6hzz\" (UniqueName: \"kubernetes.io/projected/fdcec88e-b290-47a2-a111-f353528b337e-kube-api-access-d6hzz\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537423 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537432 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537440 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537449 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:54 crc kubenswrapper[4902]: I0121 14:53:54.537458 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcec88e-b290-47a2-a111-f353528b337e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.112033 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:53:55 crc kubenswrapper[4902]: W0121 14:53:55.118426 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod561efc1e_a930_440f_83b1_a75217a11f32.slice/crio-596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225 WatchSource:0}: Error finding container 
596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225: Status 404 returned error can't find the container with id 596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.236358 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerStarted","Data":"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.236416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerStarted","Data":"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.241226 4902 generic.go:334] "Generic (PLEG): container finished" podID="137b1040-d368-4b6d-a4db-ba7c626f666f" containerID="b544fa374d13ef6e784a8d5d16f0cdb36de690b191b7cd286db841a786a83df0" exitCode=0 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.241258 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-twg7k" event={"ID":"137b1040-d368-4b6d-a4db-ba7c626f666f","Type":"ContainerDied","Data":"b544fa374d13ef6e784a8d5d16f0cdb36de690b191b7cd286db841a786a83df0"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243498 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerStarted","Data":"83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243593 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-central-agent" containerID="cri-o://51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041" gracePeriod=30 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243657 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="sg-core" containerID="cri-o://81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb" gracePeriod=30 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243717 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="proxy-httpd" containerID="cri-o://83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe" gracePeriod=30 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243729 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-notification-agent" containerID="cri-o://ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d" gracePeriod=30 Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.243621 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.254823 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" 
event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerStarted","Data":"596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.255546 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" podStartSLOduration=2.240614697 podStartE2EDuration="15.255537307s" podCreationTimestamp="2026-01-21 14:53:40 +0000 UTC" firstStartedPulling="2026-01-21 14:53:41.731964982 +0000 UTC m=+1183.808798011" lastFinishedPulling="2026-01-21 14:53:54.746887582 +0000 UTC m=+1196.823720621" observedRunningTime="2026-01-21 14:53:55.254076336 +0000 UTC m=+1197.330909365" watchObservedRunningTime="2026-01-21 14:53:55.255537307 +0000 UTC m=+1197.332370336" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.258022 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerStarted","Data":"2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.261875 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.262150 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerStarted","Data":"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.262204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerStarted","Data":"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb"} Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.292477 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.69273282 podStartE2EDuration="1m3.292460652s" podCreationTimestamp="2026-01-21 14:52:52 +0000 UTC" firstStartedPulling="2026-01-21 14:52:54.199754258 +0000 UTC m=+1136.276587287" lastFinishedPulling="2026-01-21 14:53:54.79948209 +0000 UTC m=+1196.876315119" observedRunningTime="2026-01-21 14:53:55.28708765 +0000 UTC m=+1197.363920679" watchObservedRunningTime="2026-01-21 14:53:55.292460652 +0000 UTC m=+1197.369293671" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.319671 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7887695489-rtxbl" podStartSLOduration=11.319651722 podStartE2EDuration="11.319651722s" podCreationTimestamp="2026-01-21 14:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:55.309157205 +0000 UTC m=+1197.385990234" watchObservedRunningTime="2026-01-21 14:53:55.319651722 +0000 UTC m=+1197.396484751" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.349881 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68564cb5c-bh98h" podStartSLOduration=2.821576359 podStartE2EDuration="16.349845867s" podCreationTimestamp="2026-01-21 14:53:39 +0000 UTC" firstStartedPulling="2026-01-21 14:53:41.211232325 +0000 UTC m=+1183.288065354" lastFinishedPulling="2026-01-21 14:53:54.739501833 +0000 UTC 
m=+1196.816334862" observedRunningTime="2026-01-21 14:53:55.33159835 +0000 UTC m=+1197.408431379" watchObservedRunningTime="2026-01-21 14:53:55.349845867 +0000 UTC m=+1197.426678896" Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.360861 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:53:55 crc kubenswrapper[4902]: I0121 14:53:55.368510 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-xstzn"] Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.272929 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerID="83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe" exitCode=0 Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.273311 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerID="81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb" exitCode=2 Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.273329 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerID="51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041" exitCode=0 Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.272991 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.273428 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.273441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.276862 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerStarted","Data":"709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.277245 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerStarted","Data":"b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34"} Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.277288 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.277312 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.277739 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.312794 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdcec88e-b290-47a2-a111-f353528b337e" 
path="/var/lib/kubelet/pods/fdcec88e-b290-47a2-a111-f353528b337e/volumes" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.313194 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5df595696d-2ftxp" podStartSLOduration=9.313169489 podStartE2EDuration="9.313169489s" podCreationTimestamp="2026-01-21 14:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:56.303586388 +0000 UTC m=+1198.380419417" watchObservedRunningTime="2026-01-21 14:53:56.313169489 +0000 UTC m=+1198.390002518" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.687627 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-twg7k" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886765 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886805 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886839 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.886859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.887034 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.887085 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") pod \"137b1040-d368-4b6d-a4db-ba7c626f666f\" (UID: \"137b1040-d368-4b6d-a4db-ba7c626f666f\") " Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.887898 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/137b1040-d368-4b6d-a4db-ba7c626f666f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.900190 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.901772 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts" (OuterVolumeSpecName: "scripts") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.904493 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph" (OuterVolumeSpecName: "kube-api-access-x4xph") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "kube-api-access-x4xph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.912861 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.938482 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data" (OuterVolumeSpecName: "config-data") pod "137b1040-d368-4b6d-a4db-ba7c626f666f" (UID: "137b1040-d368-4b6d-a4db-ba7c626f666f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989790 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989820 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989830 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4xph\" (UniqueName: \"kubernetes.io/projected/137b1040-d368-4b6d-a4db-ba7c626f666f-kube-api-access-x4xph\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989839 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:56 crc kubenswrapper[4902]: I0121 14:53:56.989848 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/137b1040-d368-4b6d-a4db-ba7c626f666f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.291564 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-twg7k" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.294094 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-twg7k" event={"ID":"137b1040-d368-4b6d-a4db-ba7c626f666f","Type":"ContainerDied","Data":"83424afb06205a5855d6b3c92c92324b00e4ab6828b9f7a1bf1115dc87d2cda6"} Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.294138 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83424afb06205a5855d6b3c92c92324b00e4ab6828b9f7a1bf1115dc87d2cda6" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.671571 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:57 crc kubenswrapper[4902]: E0121 14:53:57.671934 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="init" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.671951 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="init" Jan 21 14:53:57 crc kubenswrapper[4902]: E0121 14:53:57.671971 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" containerName="cinder-db-sync" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.671978 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" containerName="cinder-db-sync" Jan 21 14:53:57 crc kubenswrapper[4902]: E0121 14:53:57.671991 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.671997 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.672232 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" containerName="cinder-db-sync" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.672256 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.673331 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.680674 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.680815 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.681132 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.682335 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wh7dk" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.737992 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.745103 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.746689 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.789289 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813452 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813506 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813534 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813588 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813787 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813856 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813886 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813908 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813928 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.813946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915368 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915443 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915501 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915554 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915627 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915656 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915679 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915736 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915759 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915805 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915834 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.915857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.916882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.917313 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.917866 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.918174 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.918191 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.918470 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.921964 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.922783 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.923388 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.932967 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.933921 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") pod \"dnsmasq-dns-75dbb546bf-94qng\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.937433 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") pod \"cinder-scheduler-0\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.960413 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.966287 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.973314 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.983676 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:57 crc kubenswrapper[4902]: I0121 14:53:57.995212 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016573 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016624 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016663 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016808 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016843 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.016879 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.079849 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118496 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118563 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118639 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118689 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118719 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118786 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.118814 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.119550 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.119973 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.124023 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " 
pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.128430 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.130124 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.131079 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.142196 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") pod \"cinder-api-0\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.377972 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f8cb849-xstzn" podUID="fdcec88e-b290-47a2-a111-f353528b337e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.430224 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.529387 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:58 crc kubenswrapper[4902]: W0121 14:53:58.533634 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca715fd9_410d_4675_bbc0_3cfc6a3e2b14.slice/crio-12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb WatchSource:0}: Error finding container 12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb: Status 404 returned error can't find the container with id 12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb Jan 21 14:53:58 crc kubenswrapper[4902]: W0121 14:53:58.693174 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c09608f_53ce_4d79_85d0_75bf0e552380.slice/crio-d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9 WatchSource:0}: Error finding container d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9: Status 404 returned error can't find the container with id d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9 Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.721333 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:53:58 crc kubenswrapper[4902]: I0121 14:53:58.926147 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:58 crc kubenswrapper[4902]: W0121 14:53:58.927485 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod644ddd93_5cca_4483_b62d_548f6a863d72.slice/crio-cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad WatchSource:0}: Error finding container cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad: Status 404 returned error can't find the container with id cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.328572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerStarted","Data":"cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.330755 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerStarted","Data":"12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.334195 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerID="ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d" exitCode=0 Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.334270 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.336681 4902 generic.go:334] "Generic (PLEG): container finished" podID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerID="054019a0d14354ed0c0e875d417095f6b26794e582d8869760a6468e64837519" exitCode=0 
Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.336721 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerDied","Data":"054019a0d14354ed0c0e875d417095f6b26794e582d8869760a6468e64837519"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.336746 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerStarted","Data":"d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9"} Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.684765 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747724 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747756 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747783 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747878 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747942 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.747987 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") pod \"d8d84757-ad27-4177-be9f-d7d351e771e2\" (UID: \"d8d84757-ad27-4177-be9f-d7d351e771e2\") " Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.759501 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: 
"d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.759718 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.780245 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts" (OuterVolumeSpecName: "scripts") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.782502 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2" (OuterVolumeSpecName: "kube-api-access-bg6k2") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "kube-api-access-bg6k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.849702 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.849736 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg6k2\" (UniqueName: \"kubernetes.io/projected/d8d84757-ad27-4177-be9f-d7d351e771e2-kube-api-access-bg6k2\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.849749 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.849759 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8d84757-ad27-4177-be9f-d7d351e771e2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.883156 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.895549 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.944618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.956904 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:59 crc kubenswrapper[4902]: I0121 14:53:59.956947 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.018706 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data" (OuterVolumeSpecName: "config-data") pod "d8d84757-ad27-4177-be9f-d7d351e771e2" (UID: "d8d84757-ad27-4177-be9f-d7d351e771e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.058321 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d84757-ad27-4177-be9f-d7d351e771e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.363847 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerStarted","Data":"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4"} Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.386567 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d8d84757-ad27-4177-be9f-d7d351e771e2","Type":"ContainerDied","Data":"93b4216849acc7e83ad93b11dfedacb592b887f2e39ca3b5b2c28470072e2c3e"} Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.386615 4902 scope.go:117] "RemoveContainer" containerID="83d78b65070bf1a4a82cd01725c6ee07441aaf87ba91a13a89576279ece3cbbe" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.386831 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.391671 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerStarted","Data":"58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94"} Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.391987 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.435144 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" podStartSLOduration=3.435127696 podStartE2EDuration="3.435127696s" podCreationTimestamp="2026-01-21 14:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:00.416306463 +0000 UTC m=+1202.493139492" watchObservedRunningTime="2026-01-21 14:54:00.435127696 +0000 UTC m=+1202.511960715" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.446088 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.457751 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472373 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:00 crc kubenswrapper[4902]: E0121 14:54:00.472758 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="sg-core" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472771 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="sg-core" Jan 21 14:54:00 crc kubenswrapper[4902]: E0121 14:54:00.472786 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="proxy-httpd" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472792 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="proxy-httpd" Jan 21 14:54:00 crc kubenswrapper[4902]: E0121 14:54:00.472812 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-notification-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472818 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-notification-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: E0121 14:54:00.472836 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-central-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.472842 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-central-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.473009 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="ceilometer-central-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.473021 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" 
containerName="ceilometer-notification-agent" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.473031 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="sg-core" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.473065 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" containerName="proxy-httpd" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.474666 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.480032 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.480309 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484482 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484525 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484581 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484684 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484711 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484839 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.484877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 
14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.487700 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.541140 4902 scope.go:117] "RemoveContainer" containerID="81c864035291d1070b465dc09d84ebe0421ec1a192cefda871ae7e174cd7d5cb" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.585630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.585725 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586669 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586734 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586791 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586844 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.586874 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.590073 4902 scope.go:117] 
"RemoveContainer" containerID="ae5941c9751c3e64a04d8b1b8d712d2bab081ccec45d54e0c30b1caa64b2df4d" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.595685 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.596269 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.596896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.597004 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.603864 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") pod \"ceilometer-0\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " pod="openstack/ceilometer-0" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.667706 4902 scope.go:117] "RemoveContainer" containerID="51f743b434cb645595957492e22a74e7b49af55c8dc22934431bf87a6f8fd041" Jan 21 14:54:00 crc kubenswrapper[4902]: I0121 14:54:00.861309 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.401866 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerStarted","Data":"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7"} Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.402447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.402114 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api" containerID="cri-o://4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" gracePeriod=30 Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.402008 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api-log" containerID="cri-o://d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" gracePeriod=30 Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.405257 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerStarted","Data":"174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871"} Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.405499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerStarted","Data":"fde7aac6a44c88fcbc975cf724be75aaa4036c1671f2b89f5e2108d9cd43b508"} Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.428490 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.428462169 podStartE2EDuration="4.428462169s" podCreationTimestamp="2026-01-21 14:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:01.423653483 +0000 UTC m=+1203.500486512" watchObservedRunningTime="2026-01-21 14:54:01.428462169 +0000 UTC m=+1203.505295198" Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.449083 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.38857455 podStartE2EDuration="4.449065832s" podCreationTimestamp="2026-01-21 14:53:57 +0000 UTC" firstStartedPulling="2026-01-21 14:53:58.536462612 +0000 UTC m=+1200.613295641" lastFinishedPulling="2026-01-21 14:53:59.596953894 +0000 UTC m=+1201.673786923" observedRunningTime="2026-01-21 14:54:01.447283582 +0000 UTC m=+1203.524116611" watchObservedRunningTime="2026-01-21 14:54:01.449065832 +0000 UTC m=+1203.525898861" Jan 21 14:54:01 crc kubenswrapper[4902]: I0121 14:54:01.492426 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.047883 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234085 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234157 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234217 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234256 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234333 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234374 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234408 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") pod \"644ddd93-5cca-4483-b62d-548f6a863d72\" (UID: \"644ddd93-5cca-4483-b62d-548f6a863d72\") " Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234419 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.234816 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs" (OuterVolumeSpecName: "logs") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.235080 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/644ddd93-5cca-4483-b62d-548f6a863d72-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.235104 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/644ddd93-5cca-4483-b62d-548f6a863d72-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.239696 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts" (OuterVolumeSpecName: "scripts") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.239705 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g" (OuterVolumeSpecName: "kube-api-access-hrr4g") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "kube-api-access-hrr4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.239986 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.264378 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.287243 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data" (OuterVolumeSpecName: "config-data") pod "644ddd93-5cca-4483-b62d-548f6a863d72" (UID: "644ddd93-5cca-4483-b62d-548f6a863d72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.305501 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d84757-ad27-4177-be9f-d7d351e771e2" path="/var/lib/kubelet/pods/d8d84757-ad27-4177-be9f-d7d351e771e2/volumes" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336706 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336745 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336758 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336772 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/644ddd93-5cca-4483-b62d-548f6a863d72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.336785 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrr4g\" (UniqueName: \"kubernetes.io/projected/644ddd93-5cca-4483-b62d-548f6a863d72-kube-api-access-hrr4g\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421115 4902 generic.go:334] "Generic (PLEG): container finished" podID="644ddd93-5cca-4483-b62d-548f6a863d72" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" exitCode=0 Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421149 4902 generic.go:334] "Generic (PLEG): container finished" podID="644ddd93-5cca-4483-b62d-548f6a863d72" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" exitCode=143 Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421268 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerDied","Data":"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7"} Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421598 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerDied","Data":"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4"} Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421617 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"644ddd93-5cca-4483-b62d-548f6a863d72","Type":"ContainerDied","Data":"cc0cb71f6492ec85d79117f468c95c7c85fc7030893dc36c8264cffcce5d3dad"} Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.421626 4902 scope.go:117] "RemoveContainer" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.424293 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"e69baea6eee432132f0068671e39127960be033916e49020781e5d192b5eaecc"} Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.460374 4902 scope.go:117] "RemoveContainer" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.470798 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.479484 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.486828 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: E0121 14:54:02.487356 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api-log" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.487377 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api-log" Jan 21 14:54:02 crc kubenswrapper[4902]: E0121 14:54:02.487402 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.487412 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.492075 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api-log" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.492113 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" containerName="cinder-api" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.493978 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.500123 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.500372 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.500552 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.506295 4902 scope.go:117] "RemoveContainer" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" Jan 21 14:54:02 crc kubenswrapper[4902]: E0121 14:54:02.508667 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": container with ID starting with 4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7 not found: ID does not exist" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.508732 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7"} err="failed to get container status \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": rpc error: code = NotFound desc = could not find container \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": container with ID starting with 4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7 not found: ID does not exist" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.508759 4902 scope.go:117] "RemoveContainer" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" Jan 21 14:54:02 crc kubenswrapper[4902]: E0121 14:54:02.509264 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": container with ID starting with d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4 not found: ID does not exist" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.509311 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4"} err="failed to get container status \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": rpc error: code = NotFound desc = could not find container \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": container with ID starting with d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4 not found: ID does not exist" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.509345 4902 scope.go:117] "RemoveContainer" containerID="4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.509794 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7"} err="failed to get container status \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": rpc 
error: code = NotFound desc = could not find container \"4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7\": container with ID starting with 4f16264abb671dc3d31c1462c3e6408791a89b2450134e1b8623a5e7945506f7 not found: ID does not exist" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.509815 4902 scope.go:117] "RemoveContainer" containerID="d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.510163 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4"} err="failed to get container status \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": rpc error: code = NotFound desc = could not find container \"d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4\": container with ID starting with d45ba5aa5ab5491a35af9a1dcfc90440b230bb43a6df96ee7e39b45e3aa558b4 not found: ID does not exist" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.520538 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.652796 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653072 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653149 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653208 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653228 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653303 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653332 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.653385 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755402 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755486 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755570 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755675 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.755745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756002 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756455 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756491 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.756543 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.760375 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.760425 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.761113 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.768272 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.774745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.776742 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.781196 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") pod \"cinder-api-0\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.823146 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:54:02 crc kubenswrapper[4902]: I0121 14:54:02.995848 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 14:54:03 crc kubenswrapper[4902]: I0121 14:54:03.285706 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:54:03 crc kubenswrapper[4902]: I0121 14:54:03.435727 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerStarted","Data":"cf192cd4c08d4018b743f3dc19c0686fe97811bb3b64651346fb935eec9339db"} Jan 21 14:54:03 crc kubenswrapper[4902]: I0121 14:54:03.437548 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6"} Jan 21 14:54:03 crc kubenswrapper[4902]: I0121 14:54:03.437589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd"} Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.321488 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644ddd93-5cca-4483-b62d-548f6a863d72" path="/var/lib/kubelet/pods/644ddd93-5cca-4483-b62d-548f6a863d72/volumes" Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.451847 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8"} Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.453602 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerStarted","Data":"d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f"} Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.602311 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.816766 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.881817 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.882185 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" containerID="cri-o://f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692" gracePeriod=30 Jan 21 14:54:04 crc kubenswrapper[4902]: I0121 14:54:04.882369 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6445cbf9c4-z4mzt" 
podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log" containerID="cri-o://23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c" gracePeriod=30 Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.582329 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerStarted","Data":"4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674"} Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.583623 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.589869 4902 generic.go:334] "Generic (PLEG): container finished" podID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerID="23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c" exitCode=143 Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.590793 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerDied","Data":"23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c"} Jan 21 14:54:05 crc kubenswrapper[4902]: I0121 14:54:05.606177 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.606162934 podStartE2EDuration="3.606162934s" podCreationTimestamp="2026-01-21 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:05.601779259 +0000 UTC m=+1207.678612288" watchObservedRunningTime="2026-01-21 14:54:05.606162934 +0000 UTC m=+1207.682995963" Jan 21 14:54:06 crc kubenswrapper[4902]: I0121 14:54:06.610587 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerStarted","Data":"51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809"} Jan 21 14:54:06 crc kubenswrapper[4902]: I0121 14:54:06.640475 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.829894052 podStartE2EDuration="6.640455115s" podCreationTimestamp="2026-01-21 14:54:00 +0000 UTC" firstStartedPulling="2026-01-21 14:54:01.557273255 +0000 UTC m=+1203.634106284" lastFinishedPulling="2026-01-21 14:54:05.367834318 +0000 UTC m=+1207.444667347" observedRunningTime="2026-01-21 14:54:06.639030844 +0000 UTC m=+1208.715863883" watchObservedRunningTime="2026-01-21 14:54:06.640455115 +0000 UTC m=+1208.717288144" Jan 21 14:54:06 crc kubenswrapper[4902]: I0121 14:54:06.989957 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.633755 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.876544 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.877703 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.881563 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.884393 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-58xqz" Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.892296 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 14:54:07 crc kubenswrapper[4902]: I0121 14:54:07.910764 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.037032 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.037201 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.037251 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.037393 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.082228 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139207 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139258 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") pod \"openstackclient\" (UID: 
\"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139338 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139668 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:53984->10.217.0.156:9311: read: connection reset by peer" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.139824 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6445cbf9c4-z4mzt" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:53996->10.217.0.156:9311: read: connection reset by peer" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.140430 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.145849 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.163779 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.177460 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") pod \"openstackclient\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.197308 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.197675 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="dnsmasq-dns" containerID="cri-o://44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3" gracePeriod=10 Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.223636 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.698960 4902 generic.go:334] "Generic (PLEG): container finished" podID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerID="f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692" exitCode=0 Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.699613 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerDied","Data":"f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692"} Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.756363 4902 generic.go:334] "Generic (PLEG): container finished" podID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerID="44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3" exitCode=0 Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.756555 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerDied","Data":"44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3"} Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.863481 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 14:54:08 crc kubenswrapper[4902]: I0121 14:54:08.965619 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.045737 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091073 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091457 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091545 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091797 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.091852 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") pod \"b8579d67-5e61-40f2-9725-b695f7d7bb81\" (UID: \"b8579d67-5e61-40f2-9725-b695f7d7bb81\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.095649 4902 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs" (OuterVolumeSpecName: "logs") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.101254 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.103586 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l" (OuterVolumeSpecName: "kube-api-access-8vz6l") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "kube-api-access-8vz6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.142173 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.191456 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data" (OuterVolumeSpecName: "config-data") pod "b8579d67-5e61-40f2-9725-b695f7d7bb81" (UID: "b8579d67-5e61-40f2-9725-b695f7d7bb81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196256 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196308 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196322 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8579d67-5e61-40f2-9725-b695f7d7bb81-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196337 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8579d67-5e61-40f2-9725-b695f7d7bb81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.196350 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vz6l\" (UniqueName: \"kubernetes.io/projected/b8579d67-5e61-40f2-9725-b695f7d7bb81-kube-api-access-8vz6l\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.305771 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.320299 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400083 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400251 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400381 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400457 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.400531 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 
14:54:09.400598 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") pod \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\" (UID: \"95d16264-79d8-49aa-92aa-9f95f6f88ee5\") " Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.438233 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz" (OuterVolumeSpecName: "kube-api-access-lxthz") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "kube-api-access-lxthz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.502991 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.504450 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.504563 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxthz\" (UniqueName: \"kubernetes.io/projected/95d16264-79d8-49aa-92aa-9f95f6f88ee5-kube-api-access-lxthz\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.527846 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config" (OuterVolumeSpecName: "config") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.547622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.554683 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.564567 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95d16264-79d8-49aa-92aa-9f95f6f88ee5" (UID: "95d16264-79d8-49aa-92aa-9f95f6f88ee5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.606016 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.606071 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.606086 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.606101 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95d16264-79d8-49aa-92aa-9f95f6f88ee5-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.767814 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6445cbf9c4-z4mzt" event={"ID":"b8579d67-5e61-40f2-9725-b695f7d7bb81","Type":"ContainerDied","Data":"b50cfe39dc085f1d64312e802e7695d3392dba0e54502997f28533561b89173b"} Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.767894 4902 scope.go:117] "RemoveContainer" containerID="f1c482b66b7a7732a3644035350a84867090e3f9916167da7455775012f80692" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.768087 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6445cbf9c4-z4mzt" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.782739 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b14dfbd1-cf80-4ba8-9372-ca5767f5d689","Type":"ContainerStarted","Data":"e3a77e759e3882d3e95a1ef76b5a499badcc1c22a11d9265cd99602a2c5102a4"} Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.788922 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="cinder-scheduler" containerID="cri-o://fde7aac6a44c88fcbc975cf724be75aaa4036c1671f2b89f5e2108d9cd43b508" gracePeriod=30 Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.789284 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.791190 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="probe" containerID="cri-o://174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871" gracePeriod=30 Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.791295 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-g578w" event={"ID":"95d16264-79d8-49aa-92aa-9f95f6f88ee5","Type":"ContainerDied","Data":"3301d34c981a80c2f15194742c15041f8dd34876ffe78731a10b3a3473a032e9"} Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.835283 4902 scope.go:117] "RemoveContainer" containerID="23ea5f83fdd14483bee45b939eae36b3f6796ce08c5e8747aa8cd9acb4874d6c" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.839274 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.867097 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6445cbf9c4-z4mzt"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.894177 4902 scope.go:117] "RemoveContainer" containerID="44d81e25a8e35f144214a1d8f5c1fb7aa5a894a9f7d53ffd55c93106fb15e4c3" Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.897236 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.906517 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-g578w"] Jan 21 14:54:09 crc kubenswrapper[4902]: I0121 14:54:09.966468 4902 scope.go:117] "RemoveContainer" containerID="1bf0f0d44db0898ad571fc3dc44b938eda2883d14d10f909c94ababfd0ac149c" Jan 21 14:54:10 crc kubenswrapper[4902]: I0121 14:54:10.307207 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" path="/var/lib/kubelet/pods/95d16264-79d8-49aa-92aa-9f95f6f88ee5/volumes" Jan 21 14:54:10 crc kubenswrapper[4902]: I0121 14:54:10.307966 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" path="/var/lib/kubelet/pods/b8579d67-5e61-40f2-9725-b695f7d7bb81/volumes" Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.563338 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.638458 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.655778 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.827882 4902 generic.go:334] "Generic (PLEG): container finished" podID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerID="174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871" exitCode=0 Jan 21 14:54:11 crc kubenswrapper[4902]: I0121 14:54:11.828793 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerDied","Data":"174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871"} Jan 21 14:54:12 crc 
kubenswrapper[4902]: I0121 14:54:12.852549 4902 generic.go:334] "Generic (PLEG): container finished" podID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerID="fde7aac6a44c88fcbc975cf724be75aaa4036c1671f2b89f5e2108d9cd43b508" exitCode=0 Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.852780 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerDied","Data":"fde7aac6a44c88fcbc975cf724be75aaa4036c1671f2b89f5e2108d9cd43b508"} Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.942305 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979273 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979350 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979391 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979448 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979493 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.979669 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") pod \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\" (UID: \"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14\") " Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.982186 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.986719 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:12 crc kubenswrapper[4902]: I0121 14:54:12.992235 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk" (OuterVolumeSpecName: "kube-api-access-msnnk") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "kube-api-access-msnnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:12.999115 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts" (OuterVolumeSpecName: "scripts") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.082500 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msnnk\" (UniqueName: \"kubernetes.io/projected/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-kube-api-access-msnnk\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.082538 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.082550 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.082564 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.125386 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.131071 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data" (OuterVolumeSpecName: "config-data") pod "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" (UID: "ca715fd9-410d-4675-bbc0-3cfc6a3e2b14"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.186407 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.186612 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.878704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ca715fd9-410d-4675-bbc0-3cfc6a3e2b14","Type":"ContainerDied","Data":"12b1ce4108345896bb01a54435bff262c712e46700037435ba35ac6fcf91daeb"} Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.878760 4902 scope.go:117] "RemoveContainer" containerID="174cb307680316a6f5706c69b676f7998050e192675c67de1f05b839419fa871" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.878916 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.916532 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.920375 4902 scope.go:117] "RemoveContainer" containerID="fde7aac6a44c88fcbc975cf724be75aaa4036c1671f2b89f5e2108d9cd43b508" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.928013 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958113 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958560 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958579 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958608 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="dnsmasq-dns" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958619 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="dnsmasq-dns" Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958629 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="probe" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958638 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="probe" Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958653 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="init" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958660 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="init" Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958671 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958679 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log" Jan 21 14:54:13 crc kubenswrapper[4902]: E0121 14:54:13.958694 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="cinder-scheduler" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958702 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="cinder-scheduler" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958888 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="probe" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958905 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api-log" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958918 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8579d67-5e61-40f2-9725-b695f7d7bb81" containerName="barbican-api" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958938 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d16264-79d8-49aa-92aa-9f95f6f88ee5" containerName="dnsmasq-dns" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.958957 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" containerName="cinder-scheduler" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.960128 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.965638 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 14:54:13 crc kubenswrapper[4902]: I0121 14:54:13.969800 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000588 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000642 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000691 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000860 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.000954 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102725 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102884 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102927 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.102982 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.103135 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.104582 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.111409 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.112846 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.126219 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.126226 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.132620 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 
crc kubenswrapper[4902]: I0121 14:54:14.299610 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.305575 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca715fd9-410d-4675-bbc0-3cfc6a3e2b14" path="/var/lib/kubelet/pods/ca715fd9-410d-4675-bbc0-3cfc6a3e2b14/volumes" Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.757005 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:54:14 crc kubenswrapper[4902]: I0121 14:54:14.890965 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerStarted","Data":"83d1b2eb20981f2a9a2a1eda26c8252ba222ee4a68dd3f7546c40138c8e10370"} Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.033372 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.092892 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.093165 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-569676bc6b-gw28h" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-api" containerID="cri-o://a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" gracePeriod=30 Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.093551 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-569676bc6b-gw28h" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-httpd" containerID="cri-o://6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" gracePeriod=30 Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.657083 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.914131 4902 generic.go:334] "Generic (PLEG): container finished" podID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerID="6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" exitCode=0 Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.914201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerDied","Data":"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69"} Jan 21 14:54:15 crc kubenswrapper[4902]: I0121 14:54:15.918293 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerStarted","Data":"669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c"} Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.307674 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.309280 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.309378 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.315779 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.316805 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.316962 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.477920 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.478241 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.478266 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.478292 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.479550 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.479623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.479676 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.479707 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.580921 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.580965 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.580994 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581033 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581078 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581128 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581160 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.581702 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.584080 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.586629 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.587532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.587950 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.589481 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.599850 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.599941 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") pod \"swift-proxy-54bc9cbc97-hx966\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.657616 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.930976 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerStarted","Data":"c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001"} Jan 21 14:54:16 crc kubenswrapper[4902]: I0121 14:54:16.957645 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.957622465 podStartE2EDuration="3.957622465s" podCreationTimestamp="2026-01-21 14:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:16.951637946 +0000 UTC m=+1219.028470985" watchObservedRunningTime="2026-01-21 14:54:16.957622465 +0000 UTC m=+1219.034455494" Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.276891 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.769641 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.770216 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.943135 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerStarted","Data":"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398"} Jan 21 14:54:17 crc kubenswrapper[4902]: I0121 14:54:17.943188 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerStarted","Data":"e83ca63bcfd9328da7616c6b5c09b31fc0bd4751ea531f09a2e1f38c1a7f3d76"} Jan 21 14:54:18 crc kubenswrapper[4902]: I0121 14:54:18.953425 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerStarted","Data":"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb"} Jan 21 14:54:18 crc kubenswrapper[4902]: I0121 14:54:18.953891 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:18 crc kubenswrapper[4902]: I0121 14:54:18.953911 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:18 crc kubenswrapper[4902]: I0121 14:54:18.988567 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-54bc9cbc97-hx966" podStartSLOduration=2.988538383 podStartE2EDuration="2.988538383s" podCreationTimestamp="2026-01-21 14:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 14:54:18.981534794 +0000 UTC m=+1221.058367823" watchObservedRunningTime="2026-01-21 14:54:18.988538383 +0000 UTC m=+1221.065371412" Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.254595 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.254945 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-central-agent" containerID="cri-o://4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd" gracePeriod=30 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.254989 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-notification-agent" containerID="cri-o://cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6" gracePeriod=30 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.254985 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="sg-core" containerID="cri-o://80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8" gracePeriod=30 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.255108 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" containerID="cri-o://51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809" gracePeriod=30 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.264514 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.164:3000/\": EOF" Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.300070 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970604 4902 generic.go:334] "Generic (PLEG): container finished" podID="e729f055-6d31-4994-8561-fbefd5aba351" containerID="51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809" exitCode=0 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970645 4902 generic.go:334] "Generic (PLEG): container finished" podID="e729f055-6d31-4994-8561-fbefd5aba351" containerID="80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8" exitCode=2 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970658 4902 generic.go:334] "Generic (PLEG): container finished" podID="e729f055-6d31-4994-8561-fbefd5aba351" containerID="4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd" exitCode=0 Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970674 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809"} Jan 21 14:54:19 crc kubenswrapper[4902]: I0121 14:54:19.970725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8"} Jan 21 14:54:19 
crc kubenswrapper[4902]: I0121 14:54:19.970742 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd"} Jan 21 14:54:21 crc kubenswrapper[4902]: I0121 14:54:21.995636 4902 generic.go:334] "Generic (PLEG): container finished" podID="e729f055-6d31-4994-8561-fbefd5aba351" containerID="cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6" exitCode=0 Jan 21 14:54:21 crc kubenswrapper[4902]: I0121 14:54:21.996254 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6"} Jan 21 14:54:24 crc kubenswrapper[4902]: I0121 14:54:24.526006 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.853697 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925075 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925180 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925240 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925294 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925350 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925378 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") pod \"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925411 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") pod 
\"e729f055-6d31-4994-8561-fbefd5aba351\" (UID: \"e729f055-6d31-4994-8561-fbefd5aba351\") " Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.925840 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.929197 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.929538 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts" (OuterVolumeSpecName: "scripts") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.929728 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2" (OuterVolumeSpecName: "kube-api-access-gszw2") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "kube-api-access-gszw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:25 crc kubenswrapper[4902]: I0121 14:54:25.979786 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028548 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gszw2\" (UniqueName: \"kubernetes.io/projected/e729f055-6d31-4994-8561-fbefd5aba351-kube-api-access-gszw2\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028588 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028604 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028616 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e729f055-6d31-4994-8561-fbefd5aba351-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.028629 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.050190 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.091079 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data" (OuterVolumeSpecName: "config-data") pod "e729f055-6d31-4994-8561-fbefd5aba351" (UID: "e729f055-6d31-4994-8561-fbefd5aba351"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.130097 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.130126 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e729f055-6d31-4994-8561-fbefd5aba351-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.139135 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231303 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231367 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231418 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231441 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.231486 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") pod \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\" (UID: \"b2e9efdb-8b95-4082-8a1d-8b5a987b2516\") " Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.235478 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.236778 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l" (OuterVolumeSpecName: "kube-api-access-slg7l") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "kube-api-access-slg7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.290026 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config" (OuterVolumeSpecName: "config") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.297749 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.309234 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b2e9efdb-8b95-4082-8a1d-8b5a987b2516" (UID: "b2e9efdb-8b95-4082-8a1d-8b5a987b2516"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333508 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333569 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333582 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333612 4902 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.333626 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slg7l\" (UniqueName: \"kubernetes.io/projected/b2e9efdb-8b95-4082-8a1d-8b5a987b2516-kube-api-access-slg7l\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426417 4902 generic.go:334] "Generic (PLEG): container finished" podID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerID="a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" exitCode=0 Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426751 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerDied","Data":"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30"} Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426789 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569676bc6b-gw28h" event={"ID":"b2e9efdb-8b95-4082-8a1d-8b5a987b2516","Type":"ContainerDied","Data":"d6d295ac44d5e84b8146c28859766bda166d60fe457d50975c21ca70126c4bc2"} Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426800 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-569676bc6b-gw28h" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.426814 4902 scope.go:117] "RemoveContainer" containerID="6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.429841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b14dfbd1-cf80-4ba8-9372-ca5767f5d689","Type":"ContainerStarted","Data":"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a"} Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.443079 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e729f055-6d31-4994-8561-fbefd5aba351","Type":"ContainerDied","Data":"e69baea6eee432132f0068671e39127960be033916e49020781e5d192b5eaecc"} Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.443217 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.454297 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.207483727 podStartE2EDuration="19.454281733s" podCreationTimestamp="2026-01-21 14:54:07 +0000 UTC" firstStartedPulling="2026-01-21 14:54:09.319224308 +0000 UTC m=+1211.396057337" lastFinishedPulling="2026-01-21 14:54:25.566022314 +0000 UTC m=+1227.642855343" observedRunningTime="2026-01-21 14:54:26.449735054 +0000 UTC m=+1228.526568083" watchObservedRunningTime="2026-01-21 14:54:26.454281733 +0000 UTC m=+1228.531114762" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.533737 4902 scope.go:117] "RemoveContainer" containerID="a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.539087 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.554953 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-569676bc6b-gw28h"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.563071 4902 scope.go:117] "RemoveContainer" containerID="6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.563666 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69\": container with ID starting with 6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69 not found: ID does not exist" containerID="6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.563707 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69"} err="failed to get container status \"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69\": rpc error: code = NotFound desc = could not find container \"6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69\": container with ID starting with 6645f52e37b674d465bfc07d3f68af5aece84ac46e8ea039e1d09e361c2d7d69 not found: ID does not exist" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.563732 4902 scope.go:117] "RemoveContainer" containerID="a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" Jan 21 
14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.564460 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30\": container with ID starting with a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30 not found: ID does not exist" containerID="a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.564486 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30"} err="failed to get container status \"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30\": rpc error: code = NotFound desc = could not find container \"a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30\": container with ID starting with a0688c310036ee518232446cbece83e11326a9320d3005715d19155f9e9b3a30 not found: ID does not exist" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.564502 4902 scope.go:117] "RemoveContainer" containerID="51096ea975abaac8a431582dfeae27fd25ef6ce3f6de1cc2cf3f257b1bb51809" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.568315 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.578668 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.588782 4902 scope.go:117] "RemoveContainer" containerID="80d76f992bb8bfbe9c73393b52cc22c46d041fdcc2adb3e649bc943a55253ef8" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593323 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593786 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="sg-core" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593809 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="sg-core" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593845 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-central-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593855 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-central-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593874 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593881 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593896 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-api" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593904 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-api" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593921 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-notification-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593930 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-notification-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: E0121 14:54:26.593950 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.593958 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594184 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-notification-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594203 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="proxy-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594213 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-api" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594227 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="ceilometer-central-agent" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594238 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" containerName="neutron-httpd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.594251 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e729f055-6d31-4994-8561-fbefd5aba351" containerName="sg-core" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.596186 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.598362 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.598744 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.610734 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.610750 4902 scope.go:117] "RemoveContainer" containerID="cb93f6564b507552595aee7a7e6446eef24375fdbc49027675bd228983bf00c6" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.635313 4902 scope.go:117] "RemoveContainer" containerID="4d2c354c316e2909a7d5339fb8d71148e96c6d6c6e258b111839778343d895fd" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637366 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637605 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637638 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637723 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.637885 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 
14:54:26.663624 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.668631 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.738979 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739212 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739308 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739347 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739374 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739394 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.739443 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.743105 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.745114 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.747938 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.750779 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.756024 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.761034 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.767859 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") pod \"ceilometer-0\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " pod="openstack/ceilometer-0" Jan 21 14:54:26 crc kubenswrapper[4902]: I0121 14:54:26.923547 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:27 crc kubenswrapper[4902]: I0121 14:54:27.428213 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:27 crc kubenswrapper[4902]: I0121 14:54:27.454648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"01acb75f5aa7a52c23b6938805bfcfd86387b55bac88b2a0c34ff3a7e37b8a51"} Jan 21 14:54:28 crc kubenswrapper[4902]: I0121 14:54:28.305905 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e9efdb-8b95-4082-8a1d-8b5a987b2516" path="/var/lib/kubelet/pods/b2e9efdb-8b95-4082-8a1d-8b5a987b2516/volumes" Jan 21 14:54:28 crc kubenswrapper[4902]: I0121 14:54:28.306536 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e729f055-6d31-4994-8561-fbefd5aba351" path="/var/lib/kubelet/pods/e729f055-6d31-4994-8561-fbefd5aba351/volumes" Jan 21 14:54:31 crc kubenswrapper[4902]: I0121 14:54:31.502836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3"} Jan 21 14:54:31 crc kubenswrapper[4902]: I0121 14:54:31.503363 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189"} Jan 21 14:54:32 crc kubenswrapper[4902]: I0121 14:54:32.139494 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:32 crc kubenswrapper[4902]: I0121 14:54:32.513873 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a"} Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.529908 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerStarted","Data":"d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619"} Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530220 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="proxy-httpd" containerID="cri-o://d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619" gracePeriod=30 Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530243 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-notification-agent" containerID="cri-o://9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3" gracePeriod=30 Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530258 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="sg-core" containerID="cri-o://608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a" gracePeriod=30 Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530255 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 
21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.530189 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-central-agent" containerID="cri-o://2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189" gracePeriod=30 Jan 21 14:54:33 crc kubenswrapper[4902]: I0121 14:54:33.577724 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.822673118 podStartE2EDuration="7.577701869s" podCreationTimestamp="2026-01-21 14:54:26 +0000 UTC" firstStartedPulling="2026-01-21 14:54:27.440172175 +0000 UTC m=+1229.517005204" lastFinishedPulling="2026-01-21 14:54:33.195200926 +0000 UTC m=+1235.272033955" observedRunningTime="2026-01-21 14:54:33.555427775 +0000 UTC m=+1235.632260804" watchObservedRunningTime="2026-01-21 14:54:33.577701869 +0000 UTC m=+1235.654534898" Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541538 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff47e21a-75a1-4d66-b599-725966fa456e" containerID="d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619" exitCode=0 Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541899 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff47e21a-75a1-4d66-b599-725966fa456e" containerID="608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a" exitCode=2 Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541910 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff47e21a-75a1-4d66-b599-725966fa456e" containerID="9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3" exitCode=0 Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619"} Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541952 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a"} Jan 21 14:54:34 crc kubenswrapper[4902]: I0121 14:54:34.541970 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3"} Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 14:54:35.249792 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 14:54:35.250103 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-log" containerID="cri-o://5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456" gracePeriod=30 Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 14:54:35.250190 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-httpd" containerID="cri-o://71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19" gracePeriod=30 Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 
14:54:35.564324 4902 generic.go:334] "Generic (PLEG): container finished" podID="30ff158a-452e-4180-b99e-9a171035d794" containerID="5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456" exitCode=143 Jan 21 14:54:35 crc kubenswrapper[4902]: I0121 14:54:35.564441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerDied","Data":"5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456"} Jan 21 14:54:36 crc kubenswrapper[4902]: I0121 14:54:36.801510 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:54:36 crc kubenswrapper[4902]: I0121 14:54:36.802675 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log" containerID="cri-o://42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72" gracePeriod=30 Jan 21 14:54:36 crc kubenswrapper[4902]: I0121 14:54:36.802967 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd" containerID="cri-o://14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df" gracePeriod=30 Jan 21 14:54:37 crc kubenswrapper[4902]: I0121 14:54:37.584478 4902 generic.go:334] "Generic (PLEG): container finished" podID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerID="42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72" exitCode=143 Jan 21 14:54:37 crc kubenswrapper[4902]: I0121 14:54:37.584596 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerDied","Data":"42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72"} Jan 21 14:54:38 crc kubenswrapper[4902]: I0121 14:54:38.611141 4902 generic.go:334] "Generic (PLEG): container finished" podID="30ff158a-452e-4180-b99e-9a171035d794" containerID="71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19" exitCode=0 Jan 21 14:54:38 crc kubenswrapper[4902]: I0121 14:54:38.611225 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerDied","Data":"71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19"} Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.125952 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319598 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319658 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319801 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319894 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.319983 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320116 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320220 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320258 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") pod \"30ff158a-452e-4180-b99e-9a171035d794\" (UID: \"30ff158a-452e-4180-b99e-9a171035d794\") " Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320351 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320868 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.320866 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs" (OuterVolumeSpecName: "logs") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.327258 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr" (OuterVolumeSpecName: "kube-api-access-5x9kr") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "kube-api-access-5x9kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.327310 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts" (OuterVolumeSpecName: "scripts") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.327975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.349015 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.374544 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.388015 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data" (OuterVolumeSpecName: "config-data") pod "30ff158a-452e-4180-b99e-9a171035d794" (UID: "30ff158a-452e-4180-b99e-9a171035d794"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423105 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9kr\" (UniqueName: \"kubernetes.io/projected/30ff158a-452e-4180-b99e-9a171035d794-kube-api-access-5x9kr\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423150 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30ff158a-452e-4180-b99e-9a171035d794-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423167 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423180 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423223 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423238 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.423250 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ff158a-452e-4180-b99e-9a171035d794-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.446780 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.524636 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.621920 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"30ff158a-452e-4180-b99e-9a171035d794","Type":"ContainerDied","Data":"fd2671c5a041bc1da6743eacf4aa7bb033c883d2faa27ca05b2e9c42b04bf8ce"} Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.621982 4902 scope.go:117] "RemoveContainer" containerID="71cca2f7cf5320c189b79d957584fa123f879c7a9cb4707bef6dd5f5eb455d19" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.622020 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.666672 4902 scope.go:117] "RemoveContainer" containerID="5409eefc8bdd22f49ffbcce65cbb54882443f5c301c1ffa55abf84f9c6380456" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.668006 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.721885 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.754337 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:54:39 crc kubenswrapper[4902]: E0121 14:54:39.754934 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-log" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.754967 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-log" Jan 21 14:54:39 crc kubenswrapper[4902]: E0121 14:54:39.754991 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-httpd" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.754998 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-httpd" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.755267 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-httpd" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.755293 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ff158a-452e-4180-b99e-9a171035d794" containerName="glance-log" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.756725 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.759882 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.761414 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.763666 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931442 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931474 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931497 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931526 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931857 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.931948 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.957382 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.148:9292/healthcheck\": read tcp 10.217.0.2:49572->10.217.0.148:9292: read: connection reset by peer" Jan 21 14:54:39 crc kubenswrapper[4902]: I0121 14:54:39.958167 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.148:9292/healthcheck\": read tcp 10.217.0.2:49568->10.217.0.148:9292: read: connection reset by peer" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.034829 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035542 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035655 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035690 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035797 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035813 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.035941 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.036542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.036897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.043032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.043139 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.043273 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.043347 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.057094 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") pod \"glance-default-external-api-0\" (UID: 
\"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.069938 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.076309 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.314848 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ff158a-452e-4180-b99e-9a171035d794" path="/var/lib/kubelet/pods/30ff158a-452e-4180-b99e-9a171035d794/volumes" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.637876 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff47e21a-75a1-4d66-b599-725966fa456e" containerID="2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189" exitCode=0 Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.638174 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189"} Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.640205 4902 generic.go:334] "Generic (PLEG): container finished" podID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerID="14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df" exitCode=0 Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.640251 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerDied","Data":"14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df"} Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.850686 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.954828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.954920 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955069 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955107 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955123 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.955147 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") pod \"ff47e21a-75a1-4d66-b599-725966fa456e\" (UID: \"ff47e21a-75a1-4d66-b599-725966fa456e\") " Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.956411 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.957006 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.962494 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts" (OuterVolumeSpecName: "scripts") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4902]: I0121 14:54:40.994278 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r" (OuterVolumeSpecName: "kube-api-access-wmm5r") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "kube-api-access-wmm5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.004271 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057552 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057594 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057606 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057621 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmm5r\" (UniqueName: \"kubernetes.io/projected/ff47e21a-75a1-4d66-b599-725966fa456e-kube-api-access-wmm5r\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.057633 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff47e21a-75a1-4d66-b599-725966fa456e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.085203 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.085585 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.150140 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data" (OuterVolumeSpecName: "config-data") pod "ff47e21a-75a1-4d66-b599-725966fa456e" (UID: "ff47e21a-75a1-4d66-b599-725966fa456e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.159546 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.159584 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff47e21a-75a1-4d66-b599-725966fa456e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.260812 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261150 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261189 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261231 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261280 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261328 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261413 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.261442 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") pod \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\" (UID: \"5048d12c-b66b-4f2f-a706-0e2978b5f0db\") " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.263714 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs" (OuterVolumeSpecName: "logs") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.263898 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.266441 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.266921 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc" (OuterVolumeSpecName: "kube-api-access-tphvc") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "kube-api-access-tphvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.269166 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts" (OuterVolumeSpecName: "scripts") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.290586 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.325074 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.337956 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data" (OuterVolumeSpecName: "config-data") pod "5048d12c-b66b-4f2f-a706-0e2978b5f0db" (UID: "5048d12c-b66b-4f2f-a706-0e2978b5f0db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363896 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363943 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363957 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363971 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363982 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tphvc\" (UniqueName: \"kubernetes.io/projected/5048d12c-b66b-4f2f-a706-0e2978b5f0db-kube-api-access-tphvc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.363994 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.364006 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5048d12c-b66b-4f2f-a706-0e2978b5f0db-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.364018 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5048d12c-b66b-4f2f-a706-0e2978b5f0db-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.385948 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.467403 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.651480 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5048d12c-b66b-4f2f-a706-0e2978b5f0db","Type":"ContainerDied","Data":"8a331ab6e6d4779bd2c4ccae990c6e7b561e92f584c35ef4a58e44ff1375f620"} Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.651539 4902 scope.go:117] "RemoveContainer" containerID="14edd557813066a0cf1d74f214913ca1b420c477db44eb9e034dfe2ede5a72df" Jan 21 
14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.651675 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.666455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff47e21a-75a1-4d66-b599-725966fa456e","Type":"ContainerDied","Data":"01acb75f5aa7a52c23b6938805bfcfd86387b55bac88b2a0c34ff3a7e37b8a51"} Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.666534 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.700811 4902 scope.go:117] "RemoveContainer" containerID="42387b53645327b4ee53a9ee8c0b9dee11eb438a26b3f114231094d19f35ba72" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.703131 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.720093 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.734082 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.737233 4902 scope.go:117] "RemoveContainer" containerID="d65b8776fa29099f87b966a2054c9cd54a03d0fce6f73cb30c4f261bbc860619" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.745059 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.759588 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.759932 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="sg-core" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.759944 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="sg-core" Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.759959 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.759965 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log" Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.759974 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-notification-agent" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.759981 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-notification-agent" Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.759999 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760005 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd" Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.760026 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-central-agent" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760033 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-central-agent" Jan 21 14:54:41 crc kubenswrapper[4902]: E0121 14:54:41.760060 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="proxy-httpd" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760067 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="proxy-httpd" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760222 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-log" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760232 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="proxy-httpd" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760241 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" containerName="glance-httpd" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760250 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="sg-core" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760269 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-notification-agent" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.760278 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" containerName="ceilometer-central-agent" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.761093 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.768582 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.769210 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.779249 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.788827 4902 scope.go:117] "RemoveContainer" containerID="608e1b4097af22dfa5d4ac8e16f96f4559ccfac69b91f2f6b0c474e5b5b3009a" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.795330 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.795449 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.799614 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.808687 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.856135 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.869861 4902 scope.go:117] "RemoveContainer" containerID="9488c0012fa14d74a2416c6470b9e3bb2fd6546e005a8aa4d68294c775bb7bf3" Jan 21 14:54:41 crc kubenswrapper[4902]: W0121 14:54:41.882651 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff41f7d4_e15a_4fc3_afd9_5d86fe05768f.slice/crio-fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19 WatchSource:0}: Error finding container fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19: Status 404 returned error can't find the container with id fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19 Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.883857 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884067 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884209 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884278 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884416 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884438 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.884473 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.890703 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.949258 4902 scope.go:117] "RemoveContainer" containerID="2529bfc2f37257b9b0cf5337629fa086f24c54088c21213974bcbbd9f5d66189" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.987986 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988030 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988133 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988189 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988217 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988284 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988297 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988321 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988387 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988426 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988507 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988653 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.988881 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.992512 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.993963 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.994873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:41 crc kubenswrapper[4902]: I0121 14:54:41.999059 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.010360 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.030690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.089619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.089967 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090002 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090033 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090144 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090163 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.090185 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.091261 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.091256 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.093741 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.094452 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.094844 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.095577 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.111781 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.113778 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") pod \"ceilometer-0\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.124404 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.314244 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5048d12c-b66b-4f2f-a706-0e2978b5f0db" path="/var/lib/kubelet/pods/5048d12c-b66b-4f2f-a706-0e2978b5f0db/volumes" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.314879 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff47e21a-75a1-4d66-b599-725966fa456e" path="/var/lib/kubelet/pods/ff47e21a-75a1-4d66-b599-725966fa456e/volumes" Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.680349 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerStarted","Data":"11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd"} Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.680690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerStarted","Data":"fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19"} Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.735436 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:42 crc kubenswrapper[4902]: I0121 14:54:42.752922 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:54:42 crc kubenswrapper[4902]: W0121 14:54:42.758404 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f98b44_f071_4c67_a176_1033550150c4.slice/crio-288e100dc9d7775cb50fbd2d74baf68d2b7ce9f194f99cce744864657ed7517e WatchSource:0}: Error finding container 288e100dc9d7775cb50fbd2d74baf68d2b7ce9f194f99cce744864657ed7517e: Status 404 returned error can't find the container with id 
288e100dc9d7775cb50fbd2d74baf68d2b7ce9f194f99cce744864657ed7517e Jan 21 14:54:42 crc kubenswrapper[4902]: W0121 14:54:42.765063 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4168bc0_26cf_4786_9e28_95647462c372.slice/crio-7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401 WatchSource:0}: Error finding container 7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401: Status 404 returned error can't find the container with id 7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401 Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.690269 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerStarted","Data":"baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375"} Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.690621 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerStarted","Data":"7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401"} Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.692276 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"288e100dc9d7775cb50fbd2d74baf68d2b7ce9f194f99cce744864657ed7517e"} Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.694106 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerStarted","Data":"29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e"} Jan 21 14:54:43 crc kubenswrapper[4902]: I0121 14:54:43.716072 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.716055756 podStartE2EDuration="4.716055756s" podCreationTimestamp="2026-01-21 14:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:43.711894145 +0000 UTC m=+1245.788727184" watchObservedRunningTime="2026-01-21 14:54:43.716055756 +0000 UTC m=+1245.792888785" Jan 21 14:54:44 crc kubenswrapper[4902]: I0121 14:54:44.723773 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a"} Jan 21 14:54:44 crc kubenswrapper[4902]: I0121 14:54:44.728657 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerStarted","Data":"635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0"} Jan 21 14:54:44 crc kubenswrapper[4902]: I0121 14:54:44.758291 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.758267977 podStartE2EDuration="3.758267977s" podCreationTimestamp="2026-01-21 14:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:44.747656014 +0000 UTC m=+1246.824489033" 
watchObservedRunningTime="2026-01-21 14:54:44.758267977 +0000 UTC m=+1246.835101006" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.381812 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vcplz"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.383117 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.400701 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vcplz"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.456961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.457114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.490412 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.491462 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.503569 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.506967 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.566189 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.566297 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.566320 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.566354 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttmx\" (UniqueName: 
\"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.567149 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.582005 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.583149 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.597908 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.600535 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") pod \"nova-api-db-create-vcplz\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.667756 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttmx\" (UniqueName: \"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.667849 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.667878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.673382 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.674240 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " 
pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.689621 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.691153 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.700146 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttmx\" (UniqueName: \"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") pod \"nova-api-6f87-account-create-update-w85cg\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.707379 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.708730 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.724378 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.725740 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.729312 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.736180 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.742455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a"} Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.775746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.776068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.779305 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.779411 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.779437 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.807521 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") pod \"nova-cell0-db-create-rkcxd\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.822319 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.828336 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.829433 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.837121 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"] Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.837450 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.881812 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.881893 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.881915 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.881949 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") pod \"nova-cell1-db-create-lwq2z\" (UID: 
\"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.884689 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.908649 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") pod \"nova-cell1-db-create-lwq2z\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.983744 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.983808 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.984228 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.984348 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:45 crc kubenswrapper[4902]: I0121 14:54:45.984691 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.011489 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.014503 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") pod \"nova-cell0-7df7-account-create-update-lmnmw\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.046675 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.086093 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.086143 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.087268 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.138764 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.225085 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") pod \"nova-cell1-7ca2-account-create-update-tz26x\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.326663 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vcplz"] Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.456585 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.496899 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"] Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.769693 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"] Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.790183 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vcplz" event={"ID":"035bb03b-fb8e-4b30-a30f-bfde97b03291","Type":"ContainerStarted","Data":"70aa2cf0840fc5f93cbebf841da43d8a387c82a6f9fae61768e764946c976710"} Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.790222 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vcplz" event={"ID":"035bb03b-fb8e-4b30-a30f-bfde97b03291","Type":"ContainerStarted","Data":"790e70b4679b9c689a659e471e0ae68223287ae93a05a9119c36b9badf4b2802"} Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.805151 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-vcplz" podStartSLOduration=1.805131776 podStartE2EDuration="1.805131776s" podCreationTimestamp="2026-01-21 14:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:46.802991659 +0000 UTC m=+1248.879824688" watchObservedRunningTime="2026-01-21 14:54:46.805131776 +0000 UTC m=+1248.881964815" Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.816799 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da"} Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.819374 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-w85cg" event={"ID":"ab17d58c-9dc5-4a20-8ca7-3d06256080c3","Type":"ContainerStarted","Data":"21b2e92c10a22b6aaa2cb8e856bbc1e0f6bd360696dcd90517a4f77ba803ad6c"} Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.850202 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"] Jan 21 14:54:46 crc kubenswrapper[4902]: I0121 14:54:46.890371 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"] Jan 21 14:54:46 crc kubenswrapper[4902]: W0121 14:54:46.951784 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dfed335_1a3f_4e42_b593_e5958039dadc.slice/crio-99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d WatchSource:0}: Error finding container 99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d: Status 404 returned error can't find the container with id 99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.032742 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"] Jan 21 14:54:47 crc kubenswrapper[4902]: W0121 14:54:47.041059 4902 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6baf26e6_f197_4ae1_b7a5_40a1147e3276.slice/crio-267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1 WatchSource:0}: Error finding container 267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1: Status 404 returned error can't find the container with id 267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1 Jan 21 14:54:47 crc kubenswrapper[4902]: E0121 14:54:47.391598 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab17d58c_9dc5_4a20_8ca7_3d06256080c3.slice/crio-conmon-7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab17d58c_9dc5_4a20_8ca7_3d06256080c3.slice/crio-7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod249b1461_ed19_4572_b1e6_c5c44cfa9145.slice/crio-616e23f05d0b14c7f93dad0c321acc148cd9b2f70ea9019e00391345fff5c7ec.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.770078 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.770351 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.829914 4902 generic.go:334] "Generic (PLEG): container finished" podID="3dfed335-1a3f-4e42-b593-e5958039dadc" containerID="40c9945717c6eed6957b84780ec6e3c2301b7187e2ec047124eab88f68c26607" exitCode=0 Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.829994 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lwq2z" event={"ID":"3dfed335-1a3f-4e42-b593-e5958039dadc","Type":"ContainerDied","Data":"40c9945717c6eed6957b84780ec6e3c2301b7187e2ec047124eab88f68c26607"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.830119 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lwq2z" event={"ID":"3dfed335-1a3f-4e42-b593-e5958039dadc","Type":"ContainerStarted","Data":"99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.831818 4902 generic.go:334] "Generic (PLEG): container finished" podID="035bb03b-fb8e-4b30-a30f-bfde97b03291" containerID="70aa2cf0840fc5f93cbebf841da43d8a387c82a6f9fae61768e764946c976710" exitCode=0 Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.831909 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vcplz" event={"ID":"035bb03b-fb8e-4b30-a30f-bfde97b03291","Type":"ContainerDied","Data":"70aa2cf0840fc5f93cbebf841da43d8a387c82a6f9fae61768e764946c976710"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 
14:54:47.833329 4902 generic.go:334] "Generic (PLEG): container finished" podID="acb2fcf0-980e-418a-b776-ec7836101d6b" containerID="670dee5a8d2ff2f59f49370b068ca6bd9c9b2aa28c545aa7b4fee5f803108537" exitCode=0 Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.833413 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkcxd" event={"ID":"acb2fcf0-980e-418a-b776-ec7836101d6b","Type":"ContainerDied","Data":"670dee5a8d2ff2f59f49370b068ca6bd9c9b2aa28c545aa7b4fee5f803108537"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.833464 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkcxd" event={"ID":"acb2fcf0-980e-418a-b776-ec7836101d6b","Type":"ContainerStarted","Data":"7b64a5e748d791311599d32f08da92ac54de356948a0feec51f5f71dca33fe52"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.838546 4902 generic.go:334] "Generic (PLEG): container finished" podID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" containerID="184ed0c03e177484d5129302f45e661a1a2c46bd5bca5080444db5e2821f6ed4" exitCode=0 Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.838636 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" event={"ID":"6baf26e6-f197-4ae1-b7a5-40a1147e3276","Type":"ContainerDied","Data":"184ed0c03e177484d5129302f45e661a1a2c46bd5bca5080444db5e2821f6ed4"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.838694 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" event={"ID":"6baf26e6-f197-4ae1-b7a5-40a1147e3276","Type":"ContainerStarted","Data":"267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.841098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerStarted","Data":"2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.841250 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.842684 4902 generic.go:334] "Generic (PLEG): container finished" podID="249b1461-ed19-4572-b1e6-c5c44cfa9145" containerID="616e23f05d0b14c7f93dad0c321acc148cd9b2f70ea9019e00391345fff5c7ec" exitCode=0 Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.842718 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" event={"ID":"249b1461-ed19-4572-b1e6-c5c44cfa9145","Type":"ContainerDied","Data":"616e23f05d0b14c7f93dad0c321acc148cd9b2f70ea9019e00391345fff5c7ec"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.842748 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" event={"ID":"249b1461-ed19-4572-b1e6-c5c44cfa9145","Type":"ContainerStarted","Data":"b9c98a428978a8f247f96df66315edc95b73d5b134f4da3dfc12049bd1aa9848"} Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.848414 4902 generic.go:334] "Generic (PLEG): container finished" podID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" containerID="7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755" exitCode=0 Jan 21 14:54:47 crc kubenswrapper[4902]: I0121 14:54:47.848470 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-w85cg" 
event={"ID":"ab17d58c-9dc5-4a20-8ca7-3d06256080c3","Type":"ContainerDied","Data":"7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755"} Jan 21 14:54:48 crc kubenswrapper[4902]: I0121 14:54:48.061388 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.412573071 podStartE2EDuration="7.061371836s" podCreationTimestamp="2026-01-21 14:54:41 +0000 UTC" firstStartedPulling="2026-01-21 14:54:42.761333549 +0000 UTC m=+1244.838166578" lastFinishedPulling="2026-01-21 14:54:47.410132314 +0000 UTC m=+1249.486965343" observedRunningTime="2026-01-21 14:54:48.029694041 +0000 UTC m=+1250.106527070" watchObservedRunningTime="2026-01-21 14:54:48.061371836 +0000 UTC m=+1250.138204865" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.278846 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.397344 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ttmx\" (UniqueName: \"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") pod \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.397414 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") pod \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\" (UID: \"ab17d58c-9dc5-4a20-8ca7-3d06256080c3\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.400474 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab17d58c-9dc5-4a20-8ca7-3d06256080c3" (UID: "ab17d58c-9dc5-4a20-8ca7-3d06256080c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.409291 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx" (OuterVolumeSpecName: "kube-api-access-9ttmx") pod "ab17d58c-9dc5-4a20-8ca7-3d06256080c3" (UID: "ab17d58c-9dc5-4a20-8ca7-3d06256080c3"). InnerVolumeSpecName "kube-api-access-9ttmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.501261 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ttmx\" (UniqueName: \"kubernetes.io/projected/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-kube-api-access-9ttmx\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.501294 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab17d58c-9dc5-4a20-8ca7-3d06256080c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.571317 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.580317 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.590333 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.596980 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602376 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") pod \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602402 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") pod \"249b1461-ed19-4572-b1e6-c5c44cfa9145\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602429 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") pod \"035bb03b-fb8e-4b30-a30f-bfde97b03291\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602465 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") pod \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\" (UID: \"6baf26e6-f197-4ae1-b7a5-40a1147e3276\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602495 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") pod \"acb2fcf0-980e-418a-b776-ec7836101d6b\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602524 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") pod \"035bb03b-fb8e-4b30-a30f-bfde97b03291\" (UID: \"035bb03b-fb8e-4b30-a30f-bfde97b03291\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") pod \"acb2fcf0-980e-418a-b776-ec7836101d6b\" (UID: \"acb2fcf0-980e-418a-b776-ec7836101d6b\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.602588 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") pod \"249b1461-ed19-4572-b1e6-c5c44cfa9145\" (UID: \"249b1461-ed19-4572-b1e6-c5c44cfa9145\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.603448 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "249b1461-ed19-4572-b1e6-c5c44cfa9145" (UID: "249b1461-ed19-4572-b1e6-c5c44cfa9145"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.604401 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6baf26e6-f197-4ae1-b7a5-40a1147e3276" (UID: "6baf26e6-f197-4ae1-b7a5-40a1147e3276"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.604830 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "035bb03b-fb8e-4b30-a30f-bfde97b03291" (UID: "035bb03b-fb8e-4b30-a30f-bfde97b03291"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.604922 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "acb2fcf0-980e-418a-b776-ec7836101d6b" (UID: "acb2fcf0-980e-418a-b776-ec7836101d6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.608278 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82" (OuterVolumeSpecName: "kube-api-access-c6z82") pod "acb2fcf0-980e-418a-b776-ec7836101d6b" (UID: "acb2fcf0-980e-418a-b776-ec7836101d6b"). InnerVolumeSpecName "kube-api-access-c6z82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.608622 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47" (OuterVolumeSpecName: "kube-api-access-g7b47") pod "035bb03b-fb8e-4b30-a30f-bfde97b03291" (UID: "035bb03b-fb8e-4b30-a30f-bfde97b03291"). InnerVolumeSpecName "kube-api-access-g7b47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.609188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7" (OuterVolumeSpecName: "kube-api-access-z2gb7") pod "249b1461-ed19-4572-b1e6-c5c44cfa9145" (UID: "249b1461-ed19-4572-b1e6-c5c44cfa9145"). InnerVolumeSpecName "kube-api-access-z2gb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.609759 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.615763 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc" (OuterVolumeSpecName: "kube-api-access-vjctc") pod "6baf26e6-f197-4ae1-b7a5-40a1147e3276" (UID: "6baf26e6-f197-4ae1-b7a5-40a1147e3276"). InnerVolumeSpecName "kube-api-access-vjctc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.703732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") pod \"3dfed335-1a3f-4e42-b593-e5958039dadc\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.703898 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") pod \"3dfed335-1a3f-4e42-b593-e5958039dadc\" (UID: \"3dfed335-1a3f-4e42-b593-e5958039dadc\") " Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704520 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7b47\" (UniqueName: \"kubernetes.io/projected/035bb03b-fb8e-4b30-a30f-bfde97b03291-kube-api-access-g7b47\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704561 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb2fcf0-980e-418a-b776-ec7836101d6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704576 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/249b1461-ed19-4572-b1e6-c5c44cfa9145-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704567 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dfed335-1a3f-4e42-b593-e5958039dadc" (UID: "3dfed335-1a3f-4e42-b593-e5958039dadc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704588 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6baf26e6-f197-4ae1-b7a5-40a1147e3276-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704636 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2gb7\" (UniqueName: \"kubernetes.io/projected/249b1461-ed19-4572-b1e6-c5c44cfa9145-kube-api-access-z2gb7\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704649 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035bb03b-fb8e-4b30-a30f-bfde97b03291-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704660 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjctc\" (UniqueName: \"kubernetes.io/projected/6baf26e6-f197-4ae1-b7a5-40a1147e3276-kube-api-access-vjctc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.704670 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6z82\" (UniqueName: \"kubernetes.io/projected/acb2fcf0-980e-418a-b776-ec7836101d6b-kube-api-access-c6z82\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.706527 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v" (OuterVolumeSpecName: "kube-api-access-xd96v") pod "3dfed335-1a3f-4e42-b593-e5958039dadc" (UID: "3dfed335-1a3f-4e42-b593-e5958039dadc"). InnerVolumeSpecName "kube-api-access-xd96v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.806459 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd96v\" (UniqueName: \"kubernetes.io/projected/3dfed335-1a3f-4e42-b593-e5958039dadc-kube-api-access-xd96v\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.806486 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfed335-1a3f-4e42-b593-e5958039dadc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.866834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vcplz" event={"ID":"035bb03b-fb8e-4b30-a30f-bfde97b03291","Type":"ContainerDied","Data":"790e70b4679b9c689a659e471e0ae68223287ae93a05a9119c36b9badf4b2802"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.866865 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vcplz" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.866874 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="790e70b4679b9c689a659e471e0ae68223287ae93a05a9119c36b9badf4b2802" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.868367 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-rkcxd" event={"ID":"acb2fcf0-980e-418a-b776-ec7836101d6b","Type":"ContainerDied","Data":"7b64a5e748d791311599d32f08da92ac54de356948a0feec51f5f71dca33fe52"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.868406 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b64a5e748d791311599d32f08da92ac54de356948a0feec51f5f71dca33fe52" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.868463 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-rkcxd" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.869694 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" event={"ID":"6baf26e6-f197-4ae1-b7a5-40a1147e3276","Type":"ContainerDied","Data":"267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.869716 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ca2-account-create-update-tz26x" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.869733 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267ecb3bc9a537fdb17a975468da1b4d571e614a6cdf56cc7b325f1ea8497bd1" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.871098 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" event={"ID":"249b1461-ed19-4572-b1e6-c5c44cfa9145","Type":"ContainerDied","Data":"b9c98a428978a8f247f96df66315edc95b73d5b134f4da3dfc12049bd1aa9848"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.871143 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c98a428978a8f247f96df66315edc95b73d5b134f4da3dfc12049bd1aa9848" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.871119 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-lmnmw" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.872442 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-w85cg" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.872421 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-w85cg" event={"ID":"ab17d58c-9dc5-4a20-8ca7-3d06256080c3","Type":"ContainerDied","Data":"21b2e92c10a22b6aaa2cb8e856bbc1e0f6bd360696dcd90517a4f77ba803ad6c"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.872699 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21b2e92c10a22b6aaa2cb8e856bbc1e0f6bd360696dcd90517a4f77ba803ad6c" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.873834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lwq2z" event={"ID":"3dfed335-1a3f-4e42-b593-e5958039dadc","Type":"ContainerDied","Data":"99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d"} Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.873863 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d347ec436ffb4d87dc2fd1fd62a807257c4da75f65f961edab01bd197a4a4d" Jan 21 14:54:49 crc kubenswrapper[4902]: I0121 14:54:49.874021 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lwq2z" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.076960 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.077389 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.109107 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.121509 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.882077 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:54:50 crc kubenswrapper[4902]: I0121 14:54:50.882133 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.129946 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130352 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="249b1461-ed19-4572-b1e6-c5c44cfa9145" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130375 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="249b1461-ed19-4572-b1e6-c5c44cfa9145" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130410 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035bb03b-fb8e-4b30-a30f-bfde97b03291" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130419 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="035bb03b-fb8e-4b30-a30f-bfde97b03291" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130441 4902 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3dfed335-1a3f-4e42-b593-e5958039dadc" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130449 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfed335-1a3f-4e42-b593-e5958039dadc" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130470 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130478 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130499 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb2fcf0-980e-418a-b776-ec7836101d6b" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130508 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb2fcf0-980e-418a-b776-ec7836101d6b" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: E0121 14:54:51.130524 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130532 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130719 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb2fcf0-980e-418a-b776-ec7836101d6b" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130744 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="035bb03b-fb8e-4b30-a30f-bfde97b03291" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130752 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfed335-1a3f-4e42-b593-e5958039dadc" containerName="mariadb-database-create" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130763 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130775 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.130786 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="249b1461-ed19-4572-b1e6-c5c44cfa9145" containerName="mariadb-account-create-update" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.131349 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.134727 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4sv6b" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.134891 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.134893 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.148564 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.231260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.231303 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.231334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.231577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.333705 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.333753 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.333790 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: 
\"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.333851 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.340586 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.342740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.343667 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.373680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") pod \"nova-cell0-conductor-db-sync-2kxkv\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:51 crc kubenswrapper[4902]: I0121 14:54:51.453449 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.019366 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.113101 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.113155 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.157623 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.171891 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.907210 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" event={"ID":"f6ab900b-a76f-495c-a309-f597e2d835a8","Type":"ContainerStarted","Data":"7a95c2bf8aaa1b14521dd5e9e1895d33696aae4fd5473b52aeb0bdb216066121"} Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.907452 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:52 crc kubenswrapper[4902]: I0121 14:54:52.907614 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:53 crc kubenswrapper[4902]: I0121 14:54:53.367708 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:54:53 crc kubenswrapper[4902]: I0121 14:54:53.367839 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:54:53 crc kubenswrapper[4902]: I0121 14:54:53.528088 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.920834 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.921135 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.975921 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.976266 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-central-agent" containerID="cri-o://34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a" gracePeriod=30 Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.976291 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="proxy-httpd" containerID="cri-o://2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d" gracePeriod=30 Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.976346 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="sg-core" 
containerID="cri-o://1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da" gracePeriod=30 Jan 21 14:54:54 crc kubenswrapper[4902]: I0121 14:54:54.976332 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-notification-agent" containerID="cri-o://87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a" gracePeriod=30 Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.168488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.198219 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.932424 4902 generic.go:334] "Generic (PLEG): container finished" podID="62f98b44-f071-4c67-a176-1033550150c4" containerID="2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d" exitCode=0 Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.932654 4902 generic.go:334] "Generic (PLEG): container finished" podID="62f98b44-f071-4c67-a176-1033550150c4" containerID="1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da" exitCode=2 Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.932496 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d"} Jan 21 14:54:55 crc kubenswrapper[4902]: I0121 14:54:55.932701 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da"} Jan 21 14:54:56 crc kubenswrapper[4902]: I0121 14:54:56.956255 4902 generic.go:334] "Generic (PLEG): container finished" podID="62f98b44-f071-4c67-a176-1033550150c4" containerID="87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a" exitCode=0 Jan 21 14:54:56 crc kubenswrapper[4902]: I0121 14:54:56.956490 4902 generic.go:334] "Generic (PLEG): container finished" podID="62f98b44-f071-4c67-a176-1033550150c4" containerID="34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a" exitCode=0 Jan 21 14:54:56 crc kubenswrapper[4902]: I0121 14:54:56.956337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a"} Jan 21 14:54:56 crc kubenswrapper[4902]: I0121 14:54:56.956593 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a"} Jan 21 14:55:03 crc kubenswrapper[4902]: I0121 14:55:03.985989 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.033295 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62f98b44-f071-4c67-a176-1033550150c4","Type":"ContainerDied","Data":"288e100dc9d7775cb50fbd2d74baf68d2b7ce9f194f99cce744864657ed7517e"} Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.033331 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.033582 4902 scope.go:117] "RemoveContainer" containerID="2747adc30292da8bb9ef5316d34e01fb5b9994182a7d3c00899398caa602de5d" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.059704 4902 scope.go:117] "RemoveContainer" containerID="1efb79b1ee63c60deba157f0a412f37783a993f9bcdb9443447b3f3f4120a6da" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070505 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070572 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070622 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070648 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070690 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070744 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.070764 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") pod \"62f98b44-f071-4c67-a176-1033550150c4\" (UID: \"62f98b44-f071-4c67-a176-1033550150c4\") " Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.071481 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd" 
(OuterVolumeSpecName: "log-httpd") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.071758 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.089255 4902 scope.go:117] "RemoveContainer" containerID="87f4452a0aa396a56b842123857ff8f77a41868b23cb4fe3ecb0ab8734644f5a" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.089275 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts" (OuterVolumeSpecName: "scripts") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.089473 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg" (OuterVolumeSpecName: "kube-api-access-fvsbg") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "kube-api-access-fvsbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.097584 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.118444 4902 scope.go:117] "RemoveContainer" containerID="34b69c1a0b66c8657c3d6821b2c584dbfa27e44857ee98ee3f20ff8b752bed3a" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.146873 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.165833 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data" (OuterVolumeSpecName: "config-data") pod "62f98b44-f071-4c67-a176-1033550150c4" (UID: "62f98b44-f071-4c67-a176-1033550150c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.172943 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvsbg\" (UniqueName: \"kubernetes.io/projected/62f98b44-f071-4c67-a176-1033550150c4-kube-api-access-fvsbg\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.172980 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.172990 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.173002 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.173011 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.173019 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62f98b44-f071-4c67-a176-1033550150c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.173027 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62f98b44-f071-4c67-a176-1033550150c4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.372071 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.386465 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.400337 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:04 crc kubenswrapper[4902]: E0121 14:55:04.414721 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="sg-core" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.414765 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="sg-core" Jan 21 14:55:04 crc kubenswrapper[4902]: E0121 14:55:04.414778 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-notification-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.414784 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-notification-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: E0121 14:55:04.414811 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="proxy-httpd" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.414817 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="proxy-httpd" Jan 21 
14:55:04 crc kubenswrapper[4902]: E0121 14:55:04.414826 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-central-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.414832 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-central-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.415082 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-notification-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.415099 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="ceilometer-central-agent" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.415110 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="sg-core" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.415120 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f98b44-f071-4c67-a176-1033550150c4" containerName="proxy-httpd" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.416779 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.416869 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.422922 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.423304 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.579969 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580078 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580094 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580125 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580141 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.580166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681621 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681695 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681745 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681808 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681825 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.681860 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.682856 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.682863 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.686914 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.687252 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.688383 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.691150 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.705295 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") pod \"ceilometer-0\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " pod="openstack/ceilometer-0" Jan 21 14:55:04 crc kubenswrapper[4902]: I0121 14:55:04.734286 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:05 crc kubenswrapper[4902]: I0121 14:55:05.047192 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" event={"ID":"f6ab900b-a76f-495c-a309-f597e2d835a8","Type":"ContainerStarted","Data":"e91c9182d83789cb593143e414372ebd78fcb513ff497dbf59abde2ed01e0281"} Jan 21 14:55:05 crc kubenswrapper[4902]: I0121 14:55:05.066983 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" podStartSLOduration=2.254203863 podStartE2EDuration="14.066963304s" podCreationTimestamp="2026-01-21 14:54:51 +0000 UTC" firstStartedPulling="2026-01-21 14:54:52.023642728 +0000 UTC m=+1254.100475757" lastFinishedPulling="2026-01-21 14:55:03.836402169 +0000 UTC m=+1265.913235198" observedRunningTime="2026-01-21 14:55:05.061436167 +0000 UTC m=+1267.138269186" watchObservedRunningTime="2026-01-21 14:55:05.066963304 +0000 UTC m=+1267.143796333" Jan 21 14:55:05 crc kubenswrapper[4902]: I0121 14:55:05.197753 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:05 crc kubenswrapper[4902]: W0121 14:55:05.208445 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8167d9b9_ec38_488f_90e8_d5e11a6b75be.slice/crio-46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b WatchSource:0}: Error finding container 46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b: Status 404 returned error can't find the container with id 46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b Jan 21 14:55:06 crc kubenswrapper[4902]: I0121 14:55:06.058644 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} Jan 21 14:55:06 crc kubenswrapper[4902]: I0121 14:55:06.058922 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b"} Jan 21 14:55:06 crc kubenswrapper[4902]: I0121 14:55:06.307726 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f98b44-f071-4c67-a176-1033550150c4" path="/var/lib/kubelet/pods/62f98b44-f071-4c67-a176-1033550150c4/volumes" Jan 21 14:55:07 crc kubenswrapper[4902]: I0121 14:55:07.076925 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} Jan 21 14:55:09 crc kubenswrapper[4902]: I0121 14:55:09.118666 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} Jan 21 14:55:10 crc kubenswrapper[4902]: I0121 14:55:10.133854 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerStarted","Data":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} Jan 21 14:55:10 crc kubenswrapper[4902]: I0121 14:55:10.134192 4902 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:55:10 crc kubenswrapper[4902]: I0121 14:55:10.162957 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.008105399 podStartE2EDuration="6.162936148s" podCreationTimestamp="2026-01-21 14:55:04 +0000 UTC" firstStartedPulling="2026-01-21 14:55:05.21111288 +0000 UTC m=+1267.287945909" lastFinishedPulling="2026-01-21 14:55:09.365943629 +0000 UTC m=+1271.442776658" observedRunningTime="2026-01-21 14:55:10.159676461 +0000 UTC m=+1272.236509500" watchObservedRunningTime="2026-01-21 14:55:10.162936148 +0000 UTC m=+1272.239769177" Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.210237 4902 generic.go:334] "Generic (PLEG): container finished" podID="f6ab900b-a76f-495c-a309-f597e2d835a8" containerID="e91c9182d83789cb593143e414372ebd78fcb513ff497dbf59abde2ed01e0281" exitCode=0 Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.210397 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" event={"ID":"f6ab900b-a76f-495c-a309-f597e2d835a8","Type":"ContainerDied","Data":"e91c9182d83789cb593143e414372ebd78fcb513ff497dbf59abde2ed01e0281"} Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.770398 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.770719 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.770847 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.771743 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:55:17 crc kubenswrapper[4902]: I0121 14:55:17.771899 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369" gracePeriod=600 Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.223249 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369" exitCode=0 Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.223306 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369"} Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.223611 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61"} Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.223676 4902 scope.go:117] "RemoveContainer" containerID="f9ca57ec1458d1c5cf7c9248bedd6ee378b9620abbe566738ff33d6096aeb8f1" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.557946 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.732446 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") pod \"f6ab900b-a76f-495c-a309-f597e2d835a8\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.732525 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") pod \"f6ab900b-a76f-495c-a309-f597e2d835a8\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.732752 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") pod \"f6ab900b-a76f-495c-a309-f597e2d835a8\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.732784 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") pod \"f6ab900b-a76f-495c-a309-f597e2d835a8\" (UID: \"f6ab900b-a76f-495c-a309-f597e2d835a8\") " Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.743680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts" (OuterVolumeSpecName: "scripts") pod "f6ab900b-a76f-495c-a309-f597e2d835a8" (UID: "f6ab900b-a76f-495c-a309-f597e2d835a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.745994 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8" (OuterVolumeSpecName: "kube-api-access-vr5w8") pod "f6ab900b-a76f-495c-a309-f597e2d835a8" (UID: "f6ab900b-a76f-495c-a309-f597e2d835a8"). InnerVolumeSpecName "kube-api-access-vr5w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.761781 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data" (OuterVolumeSpecName: "config-data") pod "f6ab900b-a76f-495c-a309-f597e2d835a8" (UID: "f6ab900b-a76f-495c-a309-f597e2d835a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.762355 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6ab900b-a76f-495c-a309-f597e2d835a8" (UID: "f6ab900b-a76f-495c-a309-f597e2d835a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.834455 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr5w8\" (UniqueName: \"kubernetes.io/projected/f6ab900b-a76f-495c-a309-f597e2d835a8-kube-api-access-vr5w8\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.834652 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.834732 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:18 crc kubenswrapper[4902]: I0121 14:55:18.834787 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ab900b-a76f-495c-a309-f597e2d835a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.237232 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.237220 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2kxkv" event={"ID":"f6ab900b-a76f-495c-a309-f597e2d835a8","Type":"ContainerDied","Data":"7a95c2bf8aaa1b14521dd5e9e1895d33696aae4fd5473b52aeb0bdb216066121"} Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.237428 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a95c2bf8aaa1b14521dd5e9e1895d33696aae4fd5473b52aeb0bdb216066121" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.368448 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:55:19 crc kubenswrapper[4902]: E0121 14:55:19.369076 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ab900b-a76f-495c-a309-f597e2d835a8" containerName="nova-cell0-conductor-db-sync" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.369172 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ab900b-a76f-495c-a309-f597e2d835a8" containerName="nova-cell0-conductor-db-sync" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.369401 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ab900b-a76f-495c-a309-f597e2d835a8" containerName="nova-cell0-conductor-db-sync" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.370018 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.372960 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4sv6b" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.372968 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.422398 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.446720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.446990 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.447125 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.548984 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.549090 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.549210 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.555299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.555311 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.575699 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") pod \"nova-cell0-conductor-0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:19 crc kubenswrapper[4902]: I0121 14:55:19.701389 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:20 crc kubenswrapper[4902]: I0121 14:55:20.144350 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:55:20 crc kubenswrapper[4902]: I0121 14:55:20.254777 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"359a818e-1c34-4dfd-bb59-0e72280a85a0","Type":"ContainerStarted","Data":"4ffea13c5b1ca8a19fa0ab7ab117654ce080a9b7f7c854db7559f017b9ca3c40"} Jan 21 14:55:21 crc kubenswrapper[4902]: I0121 14:55:21.265272 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"359a818e-1c34-4dfd-bb59-0e72280a85a0","Type":"ContainerStarted","Data":"a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe"} Jan 21 14:55:21 crc kubenswrapper[4902]: I0121 14:55:21.265625 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:21 crc kubenswrapper[4902]: I0121 14:55:21.293924 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.293904114 podStartE2EDuration="2.293904114s" podCreationTimestamp="2026-01-21 14:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:21.282772087 +0000 UTC m=+1283.359605116" watchObservedRunningTime="2026-01-21 14:55:21.293904114 +0000 UTC m=+1283.370737163" Jan 21 14:55:29 crc kubenswrapper[4902]: I0121 14:55:29.748436 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.260825 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.262240 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.265080 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.266291 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.276465 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.336487 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.336534 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.336573 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.336679 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.438657 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.438979 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.439021 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.439098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.445718 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.449602 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.453745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.483314 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.485076 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.487370 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.488846 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.492632 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.495019 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.498855 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") pod \"nova-cell0-cell-mapping-hlnnm\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.499254 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.626611 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633763 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633890 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633937 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.633978 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.634060 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.634137 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.634175 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.665264 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743498 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " 
pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743785 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743857 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743920 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.743969 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.744067 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.744155 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.744198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.746159 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.749683 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.760368 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc 
kubenswrapper[4902]: I0121 14:55:30.761093 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.762834 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.782852 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.783897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") pod \"nova-metadata-0\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.805775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") pod \"nova-api-0\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.849852 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.871762 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.879091 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.880267 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.918753 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.921757 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.935501 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.936680 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.940866 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947565 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947610 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947631 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947662 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947687 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.947775 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.948906 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.968370 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.970756 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:30 crc kubenswrapper[4902]: I0121 14:55:30.988275 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.049882 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.049938 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.049976 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.049992 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050010 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050029 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050095 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050154 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050170 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050190 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.050204 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.060727 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.061076 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.062571 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.062641 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.069757 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") pod \"nova-scheduler-0\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.078906 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153006 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153100 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153124 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153147 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153203 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.153269 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.155162 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.155321 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.155778 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 
14:55:31.156092 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.156313 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.186482 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") pod \"dnsmasq-dns-647df7b8c5-2m4b6\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.254945 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.278105 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.299544 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.326324 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.391892 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlnnm" event={"ID":"86cb92f1-5dde-4389-a5c8-1c0f76b1478d","Type":"ContainerStarted","Data":"7fb36f9b5a0756160aa3e55d39cd3770f84453c05c73ff6f985ac88cc53732b4"} Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.488817 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.566144 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.636742 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.857348 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.858554 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.863129 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.863185 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.866000 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.934001 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:31 crc kubenswrapper[4902]: W0121 14:55:31.937656 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcabfbeed_c979_4978_bdeb_68ac2c9023a1.slice/crio-d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d WatchSource:0}: Error finding container d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d: Status 404 returned error can't find the container with id d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.998800 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.998940 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.998971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:31 crc kubenswrapper[4902]: I0121 14:55:31.999000 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: W0121 14:55:32.057423 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38f0c216_daa1_42c6_9105_11ad7d5fc686.slice/crio-39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82 WatchSource:0}: Error finding container 39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82: Status 404 returned error can't find the container with id 39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82 Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 
14:55:32.065931 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.074837 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.100630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.100869 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.100954 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.102521 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.104391 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.107535 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.108679 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.116875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") pod \"nova-cell1-conductor-db-sync-lrj4d\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.181819 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.409393 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerStarted","Data":"023c89d0fbc7b0fc5efc71aecbfa4d8d80f6d918ae1bd1523efe2599e8dc31eb"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.411529 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlnnm" event={"ID":"86cb92f1-5dde-4389-a5c8-1c0f76b1478d","Type":"ContainerStarted","Data":"80f1113ebae178430104e31cb438bfd4b8237fd75e17bfe92c4d153d21a7d7b4"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.413893 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerStarted","Data":"34f32efff8c1f6aabcc0c5371906f0935d5fd2d86c65b2814e1ce5ed501c9460"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.415768 4902 generic.go:334] "Generic (PLEG): container finished" podID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerID="e33abee5a4d9568ebeebc43b93ca969e5d4b5cadc5a0cff7461433d918dfb71d" exitCode=0 Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.415815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerDied","Data":"e33abee5a4d9568ebeebc43b93ca969e5d4b5cadc5a0cff7461433d918dfb71d"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.415832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerStarted","Data":"39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.419960 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7593ca7-9aeb-4763-8bc3-964147d459ce","Type":"ContainerStarted","Data":"20c0da8ce9148a9ce1d2bbb934c0cba1985f7cac4f00b74da4ba453452a4725d"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.433181 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cabfbeed-c979-4978-bdeb-68ac2c9023a1","Type":"ContainerStarted","Data":"d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d"} Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.439486 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hlnnm" podStartSLOduration=2.439464293 podStartE2EDuration="2.439464293s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:32.426191099 +0000 UTC m=+1294.503024128" watchObservedRunningTime="2026-01-21 14:55:32.439464293 +0000 UTC m=+1294.516297332" Jan 21 14:55:32 crc kubenswrapper[4902]: I0121 14:55:32.663346 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:55:32 crc kubenswrapper[4902]: W0121 14:55:32.684640 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169597ed_1e1f_490a_8d17_0d6520ae39d1.slice/crio-0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af WatchSource:0}: 
Error finding container 0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af: Status 404 returned error can't find the container with id 0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.451743 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerStarted","Data":"33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc"} Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.452195 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.457096 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" event={"ID":"169597ed-1e1f-490a-8d17-0d6520ae39d1","Type":"ContainerStarted","Data":"f49c0a85c7357d87bfb57238893040a47cac5bc0bd2e46a347d2884a529aa300"} Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.457178 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" event={"ID":"169597ed-1e1f-490a-8d17-0d6520ae39d1","Type":"ContainerStarted","Data":"0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af"} Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.478074 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" podStartSLOduration=3.478036107 podStartE2EDuration="3.478036107s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:33.471886193 +0000 UTC m=+1295.548719212" watchObservedRunningTime="2026-01-21 14:55:33.478036107 +0000 UTC m=+1295.554869136" Jan 21 14:55:33 crc kubenswrapper[4902]: I0121 14:55:33.501412 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" podStartSLOduration=2.5013918200000003 podStartE2EDuration="2.50139182s" podCreationTimestamp="2026-01-21 14:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:33.494366242 +0000 UTC m=+1295.571199271" watchObservedRunningTime="2026-01-21 14:55:33.50139182 +0000 UTC m=+1295.578224849" Jan 21 14:55:34 crc kubenswrapper[4902]: I0121 14:55:34.437466 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:34 crc kubenswrapper[4902]: I0121 14:55:34.449539 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:34 crc kubenswrapper[4902]: I0121 14:55:34.746719 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.485930 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerStarted","Data":"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.486704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerStarted","Data":"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.486183 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-metadata" containerID="cri-o://d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" gracePeriod=30 Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.486105 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-log" containerID="cri-o://1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" gracePeriod=30 Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.489684 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerStarted","Data":"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.489967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerStarted","Data":"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.497537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7593ca7-9aeb-4763-8bc3-964147d459ce","Type":"ContainerStarted","Data":"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.497670 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" gracePeriod=30 Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.512350 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cabfbeed-c979-4978-bdeb-68ac2c9023a1","Type":"ContainerStarted","Data":"0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7"} Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.522657 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.904092017 podStartE2EDuration="6.52263796s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="2026-01-21 14:55:31.56583071 +0000 UTC m=+1293.642663739" lastFinishedPulling="2026-01-21 14:55:35.184376653 +0000 UTC m=+1297.261209682" observedRunningTime="2026-01-21 14:55:36.51024468 +0000 UTC m=+1298.587077709" watchObservedRunningTime="2026-01-21 14:55:36.52263796 +0000 UTC m=+1298.599470989" Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.533951 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.052426564 podStartE2EDuration="6.533935622s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="2026-01-21 14:55:31.695337554 +0000 UTC m=+1293.772170583" lastFinishedPulling="2026-01-21 14:55:35.176846612 +0000 UTC m=+1297.253679641" observedRunningTime="2026-01-21 14:55:36.533103289 +0000 
UTC m=+1298.609936318" watchObservedRunningTime="2026-01-21 14:55:36.533935622 +0000 UTC m=+1298.610768651" Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.553499 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.440434914 podStartE2EDuration="6.553483503s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="2026-01-21 14:55:32.065150289 +0000 UTC m=+1294.141983318" lastFinishedPulling="2026-01-21 14:55:35.178198878 +0000 UTC m=+1297.255031907" observedRunningTime="2026-01-21 14:55:36.547924915 +0000 UTC m=+1298.624757944" watchObservedRunningTime="2026-01-21 14:55:36.553483503 +0000 UTC m=+1298.630316532" Jan 21 14:55:36 crc kubenswrapper[4902]: I0121 14:55:36.576584 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.339683406 podStartE2EDuration="6.576567949s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="2026-01-21 14:55:31.939479747 +0000 UTC m=+1294.016312776" lastFinishedPulling="2026-01-21 14:55:35.17636429 +0000 UTC m=+1297.253197319" observedRunningTime="2026-01-21 14:55:36.565067322 +0000 UTC m=+1298.641900351" watchObservedRunningTime="2026-01-21 14:55:36.576567949 +0000 UTC m=+1298.653400978" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.077057 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.227884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") pod \"3d8ede08-fd8e-4922-901c-9767821d918d\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.227948 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") pod \"3d8ede08-fd8e-4922-901c-9767821d918d\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.228026 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") pod \"3d8ede08-fd8e-4922-901c-9767821d918d\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.228156 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") pod \"3d8ede08-fd8e-4922-901c-9767821d918d\" (UID: \"3d8ede08-fd8e-4922-901c-9767821d918d\") " Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.228434 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs" (OuterVolumeSpecName: "logs") pod "3d8ede08-fd8e-4922-901c-9767821d918d" (UID: "3d8ede08-fd8e-4922-901c-9767821d918d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.228761 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d8ede08-fd8e-4922-901c-9767821d918d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.246192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn" (OuterVolumeSpecName: "kube-api-access-6msrn") pod "3d8ede08-fd8e-4922-901c-9767821d918d" (UID: "3d8ede08-fd8e-4922-901c-9767821d918d"). InnerVolumeSpecName "kube-api-access-6msrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.275170 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data" (OuterVolumeSpecName: "config-data") pod "3d8ede08-fd8e-4922-901c-9767821d918d" (UID: "3d8ede08-fd8e-4922-901c-9767821d918d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.284364 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d8ede08-fd8e-4922-901c-9767821d918d" (UID: "3d8ede08-fd8e-4922-901c-9767821d918d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.330213 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6msrn\" (UniqueName: \"kubernetes.io/projected/3d8ede08-fd8e-4922-901c-9767821d918d-kube-api-access-6msrn\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.330245 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.330255 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ede08-fd8e-4922-901c-9767821d918d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.521980 4902 generic.go:334] "Generic (PLEG): container finished" podID="3d8ede08-fd8e-4922-901c-9767821d918d" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" exitCode=0 Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522017 4902 generic.go:334] "Generic (PLEG): container finished" podID="3d8ede08-fd8e-4922-901c-9767821d918d" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" exitCode=143 Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522034 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522112 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerDied","Data":"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f"} Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522169 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerDied","Data":"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991"} Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522180 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3d8ede08-fd8e-4922-901c-9767821d918d","Type":"ContainerDied","Data":"023c89d0fbc7b0fc5efc71aecbfa4d8d80f6d918ae1bd1523efe2599e8dc31eb"} Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.522212 4902 scope.go:117] "RemoveContainer" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.546473 4902 scope.go:117] "RemoveContainer" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.559823 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.582882 4902 scope.go:117] "RemoveContainer" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" Jan 21 14:55:37 crc kubenswrapper[4902]: E0121 14:55:37.583308 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": container with ID starting with d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f not found: ID does not exist" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.583335 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f"} err="failed to get container status \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": rpc error: code = NotFound desc = could not find container \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": container with ID starting with d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f not found: ID does not exist" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.583353 4902 scope.go:117] "RemoveContainer" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" Jan 21 14:55:37 crc kubenswrapper[4902]: E0121 14:55:37.583759 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": container with ID starting with 1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991 not found: ID does not exist" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.583818 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991"} err="failed to get container status \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": rpc error: code = NotFound desc = could not find container \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": container with ID starting with 1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991 not found: ID does not exist" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.583832 4902 scope.go:117] "RemoveContainer" containerID="d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.584003 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f"} err="failed to get container status \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": rpc error: code = NotFound desc = could not find container \"d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f\": container with ID starting with d4ae477964ea1d5cd21127a987925f4ef3d28e830fc547633fdef63a18711c4f not found: ID does not exist" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.584051 4902 scope.go:117] "RemoveContainer" containerID="1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.584196 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991"} err="failed to get container status \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": rpc error: code = NotFound desc = could not find container \"1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991\": container with ID starting with 1966a57bb14885289a602de17caa3bb5a979fc25c40a650cf35d8b12dfe68991 not found: ID does not exist" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.586275 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.624782 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:37 crc kubenswrapper[4902]: E0121 14:55:37.625339 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-metadata" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.625363 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-metadata" Jan 21 14:55:37 crc kubenswrapper[4902]: E0121 14:55:37.625408 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-log" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.625418 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-log" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.625646 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-log" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.625668 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" containerName="nova-metadata-metadata" Jan 21 14:55:37 crc 
kubenswrapper[4902]: I0121 14:55:37.626849 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.629631 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.629817 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.637443 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.740326 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.740666 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.740863 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.740924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.741240 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842684 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842731 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842812 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842853 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.842871 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.844025 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.847854 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.850660 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.850715 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.860433 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") pod \"nova-metadata-0\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4902]: I0121 14:55:37.950839 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:38 crc kubenswrapper[4902]: I0121 14:55:38.363296 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8ede08-fd8e-4922-901c-9767821d918d" path="/var/lib/kubelet/pods/3d8ede08-fd8e-4922-901c-9767821d918d/volumes" Jan 21 14:55:38 crc kubenswrapper[4902]: W0121 14:55:38.560116 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89ac4ce1_6229_4354_a3a7_13251f691937.slice/crio-128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0 WatchSource:0}: Error finding container 128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0: Status 404 returned error can't find the container with id 128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0 Jan 21 14:55:38 crc kubenswrapper[4902]: I0121 14:55:38.560125 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.552642 4902 generic.go:334] "Generic (PLEG): container finished" podID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" containerID="80f1113ebae178430104e31cb438bfd4b8237fd75e17bfe92c4d153d21a7d7b4" exitCode=0 Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.552734 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlnnm" event={"ID":"86cb92f1-5dde-4389-a5c8-1c0f76b1478d","Type":"ContainerDied","Data":"80f1113ebae178430104e31cb438bfd4b8237fd75e17bfe92c4d153d21a7d7b4"} Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.555887 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerStarted","Data":"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83"} Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.555937 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerStarted","Data":"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3"} Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.555946 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerStarted","Data":"128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0"} Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.595982 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.596188 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerName="kube-state-metrics" containerID="cri-o://eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" gracePeriod=30 Jan 21 14:55:39 crc kubenswrapper[4902]: I0121 14:55:39.598898 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5988764680000003 podStartE2EDuration="2.598876468s" podCreationTimestamp="2026-01-21 14:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:39.598361984 +0000 UTC m=+1301.675195013" watchObservedRunningTime="2026-01-21 14:55:39.598876468 +0000 UTC m=+1301.675709497" 
Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.103668 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.300204 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") pod \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\" (UID: \"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113\") " Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.306033 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5" (OuterVolumeSpecName: "kube-api-access-jhfv5") pod "14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" (UID: "14fb6fe4-7f85-4d0a-b6f6-a86c152cb113"). InnerVolumeSpecName "kube-api-access-jhfv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.401927 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhfv5\" (UniqueName: \"kubernetes.io/projected/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113-kube-api-access-jhfv5\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568107 4902 generic.go:334] "Generic (PLEG): container finished" podID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerID="eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" exitCode=2 Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568184 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113","Type":"ContainerDied","Data":"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba"} Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568231 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"14fb6fe4-7f85-4d0a-b6f6-a86c152cb113","Type":"ContainerDied","Data":"83455c4bf3aeb7b7c76443c4b9198dde4cf810334ccfb634a4b5c17df6d13e97"} Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568224 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.568253 4902 scope.go:117] "RemoveContainer" containerID="eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.598424 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.615919 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.633403 4902 scope.go:117] "RemoveContainer" containerID="eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" Jan 21 14:55:40 crc kubenswrapper[4902]: E0121 14:55:40.635736 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba\": container with ID starting with eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba not found: ID does not exist" containerID="eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.636336 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba"} err="failed to get container status \"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba\": rpc error: code = NotFound desc = could not find container \"eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba\": container with ID starting with eb0bebafcdadbaf00d607d3e5937a1acb8202e28b443ba985df04ec63a99deba not found: ID does not exist" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.656113 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:40 crc kubenswrapper[4902]: E0121 14:55:40.656661 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerName="kube-state-metrics" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.656680 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerName="kube-state-metrics" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.656971 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" containerName="kube-state-metrics" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.657793 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.662775 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.663181 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.666977 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.817037 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.829249 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4sgn\" (UniqueName: \"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.829393 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.829642 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.860572 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.860613 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.932661 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.932759 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.932799 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4sgn\" (UniqueName: 
\"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.932853 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.952546 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:40 crc kubenswrapper[4902]: I0121 14:55:40.957503 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.009705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.022125 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4sgn\" (UniqueName: \"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") pod \"kube-state-metrics-0\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " pod="openstack/kube-state-metrics-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.135777 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.237836 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") pod \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.237955 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") pod \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.238198 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") pod \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.238246 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") pod \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\" (UID: \"86cb92f1-5dde-4389-a5c8-1c0f76b1478d\") " Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.241891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq" (OuterVolumeSpecName: "kube-api-access-2xdvq") pod "86cb92f1-5dde-4389-a5c8-1c0f76b1478d" (UID: "86cb92f1-5dde-4389-a5c8-1c0f76b1478d"). InnerVolumeSpecName "kube-api-access-2xdvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.243905 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts" (OuterVolumeSpecName: "scripts") pod "86cb92f1-5dde-4389-a5c8-1c0f76b1478d" (UID: "86cb92f1-5dde-4389-a5c8-1c0f76b1478d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.257157 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.257656 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.271177 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86cb92f1-5dde-4389-a5c8-1c0f76b1478d" (UID: "86cb92f1-5dde-4389-a5c8-1c0f76b1478d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.279588 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.290511 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data" (OuterVolumeSpecName: "config-data") pod "86cb92f1-5dde-4389-a5c8-1c0f76b1478d" (UID: "86cb92f1-5dde-4389-a5c8-1c0f76b1478d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.292149 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.293551 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.302163 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.340714 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.340993 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xdvq\" (UniqueName: \"kubernetes.io/projected/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-kube-api-access-2xdvq\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.341007 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.341028 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cb92f1-5dde-4389-a5c8-1c0f76b1478d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.411970 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.412394 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="dnsmasq-dns" containerID="cri-o://58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94" gracePeriod=10 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.629833 4902 generic.go:334] "Generic (PLEG): container finished" podID="169597ed-1e1f-490a-8d17-0d6520ae39d1" containerID="f49c0a85c7357d87bfb57238893040a47cac5bc0bd2e46a347d2884a529aa300" exitCode=0 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.629946 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" event={"ID":"169597ed-1e1f-490a-8d17-0d6520ae39d1","Type":"ContainerDied","Data":"f49c0a85c7357d87bfb57238893040a47cac5bc0bd2e46a347d2884a529aa300"} Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.638160 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hlnnm" 
event={"ID":"86cb92f1-5dde-4389-a5c8-1c0f76b1478d","Type":"ContainerDied","Data":"7fb36f9b5a0756160aa3e55d39cd3770f84453c05c73ff6f985ac88cc53732b4"} Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.638209 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb36f9b5a0756160aa3e55d39cd3770f84453c05c73ff6f985ac88cc53732b4" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.638286 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hlnnm" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.643011 4902 generic.go:334] "Generic (PLEG): container finished" podID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerID="58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94" exitCode=0 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.643200 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerDied","Data":"58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94"} Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.685165 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.783654 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.783920 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log" containerID="cri-o://802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d" gracePeriod=30 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.784497 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api" containerID="cri-o://adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a" gracePeriod=30 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.790828 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.791014 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-log" containerID="cri-o://46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" gracePeriod=30 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.791140 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-metadata" containerID="cri-o://5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" gracePeriod=30 Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.832436 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": EOF" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.832436 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log" probeResult="failure" output="Get 
\"http://10.217.0.183:8774/\": EOF" Jan 21 14:55:41 crc kubenswrapper[4902]: I0121 14:55:41.916614 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.334720 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fb6fe4-7f85-4d0a-b6f6-a86c152cb113" path="/var/lib/kubelet/pods/14fb6fe4-7f85-4d0a-b6f6-a86c152cb113/volumes" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.335585 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.441971 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.442260 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-central-agent" containerID="cri-o://ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" gracePeriod=30 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.442652 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="proxy-httpd" containerID="cri-o://409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" gracePeriod=30 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.442693 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="sg-core" containerID="cri-o://8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" gracePeriod=30 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.442728 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-notification-agent" containerID="cri-o://f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" gracePeriod=30 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.579213 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.634152 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.697189 4902 generic.go:334] "Generic (PLEG): container finished" podID="09716208-ecef-418b-b04b-fcfad53e017d" containerID="802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d" exitCode=143 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.697295 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerDied","Data":"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.706934 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" event={"ID":"9c09608f-53ce-4d79-85d0-75bf0e552380","Type":"ContainerDied","Data":"d5afd22000b4d3f3c6a7c4d47e16d67d68cf4b35e698216b1393d6178399d3b9"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.706987 4902 scope.go:117] "RemoveContainer" containerID="58e43d8b58cb6c7891b30fdbcfbbaedb613b0110edddfc00c5eeec2b0d50db94" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.707150 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-94qng" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.711506 4902 generic.go:334] "Generic (PLEG): container finished" podID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" exitCode=2 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.711550 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.713008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b52494a8-ff56-449e-a274-b37eb4bad43d","Type":"ContainerStarted","Data":"b11f8ee0923ff98e0291569b03ef8eeccd15dca9bc3a6e79246d5a184580c3ae"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714310 4902 generic.go:334] "Generic (PLEG): container finished" podID="89ac4ce1-6229-4354-a3a7-13251f691937" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" exitCode=0 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714330 4902 generic.go:334] "Generic (PLEG): container finished" podID="89ac4ce1-6229-4354-a3a7-13251f691937" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" exitCode=143 Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714465 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerDied","Data":"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714852 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerDied","Data":"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.714863 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"89ac4ce1-6229-4354-a3a7-13251f691937","Type":"ContainerDied","Data":"128e5578ec9eebb1bc576fc0112c902af373a1f591fbf27e6e74b44b8af2d2e0"} Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.734752 4902 scope.go:117] "RemoveContainer" containerID="054019a0d14354ed0c0e875d417095f6b26794e582d8869760a6468e64837519" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.775549 4902 scope.go:117] "RemoveContainer" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786307 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786433 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786479 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786512 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786541 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786599 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786642 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786674 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786750 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786839 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") pod \"9c09608f-53ce-4d79-85d0-75bf0e552380\" (UID: \"9c09608f-53ce-4d79-85d0-75bf0e552380\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.786933 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") pod \"89ac4ce1-6229-4354-a3a7-13251f691937\" (UID: \"89ac4ce1-6229-4354-a3a7-13251f691937\") " Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.789142 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs" (OuterVolumeSpecName: "logs") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.792971 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx" (OuterVolumeSpecName: "kube-api-access-hssmx") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "kube-api-access-hssmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.794886 4902 scope.go:117] "RemoveContainer" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.805247 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r" (OuterVolumeSpecName: "kube-api-access-gkz6r") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "kube-api-access-gkz6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.839072 4902 scope.go:117] "RemoveContainer" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" Jan 21 14:55:42 crc kubenswrapper[4902]: E0121 14:55:42.840787 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": container with ID starting with 5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83 not found: ID does not exist" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.840827 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83"} err="failed to get container status \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": rpc error: code = NotFound desc = could not find container \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": container with ID starting with 5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83 not found: ID does not exist" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.840852 4902 scope.go:117] "RemoveContainer" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" Jan 21 14:55:42 crc kubenswrapper[4902]: E0121 14:55:42.841706 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": container with ID starting with 46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3 not found: ID does not exist" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.841748 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3"} err="failed to get container status \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": rpc error: code = NotFound desc = could not find container \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": container with ID starting with 46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3 not found: ID does not exist" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.841766 4902 scope.go:117] "RemoveContainer" containerID="5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.848953 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data" (OuterVolumeSpecName: "config-data") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.849613 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83"} err="failed to get container status \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": rpc error: code = NotFound desc = could not find container \"5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83\": container with ID starting with 5584a14feaac554b8c87e39d9561016fe2dcfd5552f57586694687fd47e13d83 not found: ID does not exist" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.849663 4902 scope.go:117] "RemoveContainer" containerID="46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.850354 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3"} err="failed to get container status \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": rpc error: code = NotFound desc = could not find container \"46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3\": container with ID starting with 46137519f4240c7f60135f6c414ecb74e24c936e98c2fa50e4200d7569bae0f3 not found: ID does not exist" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.869789 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.873176 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.874327 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.887314 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config" (OuterVolumeSpecName: "config") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.898257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.911192 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.911593 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "89ac4ce1-6229-4354-a3a7-13251f691937" (UID: "89ac4ce1-6229-4354-a3a7-13251f691937"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912359 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912390 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912404 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hssmx\" (UniqueName: \"kubernetes.io/projected/9c09608f-53ce-4d79-85d0-75bf0e552380-kube-api-access-hssmx\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912416 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkz6r\" (UniqueName: \"kubernetes.io/projected/89ac4ce1-6229-4354-a3a7-13251f691937-kube-api-access-gkz6r\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912427 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ac4ce1-6229-4354-a3a7-13251f691937-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912437 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.912448 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:42 crc kubenswrapper[4902]: I0121 14:55:42.913600 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9c09608f-53ce-4d79-85d0-75bf0e552380" (UID: "9c09608f-53ce-4d79-85d0-75bf0e552380"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.016524 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.017247 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/89ac4ce1-6229-4354-a3a7-13251f691937-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.017281 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c09608f-53ce-4d79-85d0-75bf0e552380-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.068837 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.069066 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.078915 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-94qng"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.103646 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.115241 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.118332 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") pod \"169597ed-1e1f-490a-8d17-0d6520ae39d1\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.118751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") pod \"169597ed-1e1f-490a-8d17-0d6520ae39d1\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.118807 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") pod \"169597ed-1e1f-490a-8d17-0d6520ae39d1\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.118840 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") pod \"169597ed-1e1f-490a-8d17-0d6520ae39d1\" (UID: \"169597ed-1e1f-490a-8d17-0d6520ae39d1\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.124420 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.124997 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-metadata" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125091 4902 
state_mem.go:107] "Deleted CPUSet assignment" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-metadata" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125173 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="dnsmasq-dns" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125253 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="dnsmasq-dns" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125321 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" containerName="nova-manage" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125371 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" containerName="nova-manage" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125435 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="init" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125495 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="init" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125562 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-log" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125614 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-log" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.125673 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169597ed-1e1f-490a-8d17-0d6520ae39d1" containerName="nova-cell1-conductor-db-sync" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125724 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="169597ed-1e1f-490a-8d17-0d6520ae39d1" containerName="nova-cell1-conductor-db-sync" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.125947 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" containerName="nova-manage" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126074 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" containerName="dnsmasq-dns" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126141 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="169597ed-1e1f-490a-8d17-0d6520ae39d1" containerName="nova-cell1-conductor-db-sync" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126218 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-metadata" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126291 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" containerName="nova-metadata-log" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.128035 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.126240 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts" (OuterVolumeSpecName: "scripts") pod "169597ed-1e1f-490a-8d17-0d6520ae39d1" (UID: "169597ed-1e1f-490a-8d17-0d6520ae39d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.132557 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.133243 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.134251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf" (OuterVolumeSpecName: "kube-api-access-7hbvf") pod "169597ed-1e1f-490a-8d17-0d6520ae39d1" (UID: "169597ed-1e1f-490a-8d17-0d6520ae39d1"). InnerVolumeSpecName "kube-api-access-7hbvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.163983 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.186524 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "169597ed-1e1f-490a-8d17-0d6520ae39d1" (UID: "169597ed-1e1f-490a-8d17-0d6520ae39d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.197112 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data" (OuterVolumeSpecName: "config-data") pod "169597ed-1e1f-490a-8d17-0d6520ae39d1" (UID: "169597ed-1e1f-490a-8d17-0d6520ae39d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.222179 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.222221 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.222230 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169597ed-1e1f-490a-8d17-0d6520ae39d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.222240 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hbvf\" (UniqueName: \"kubernetes.io/projected/169597ed-1e1f-490a-8d17-0d6520ae39d1-kube-api-access-7hbvf\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325373 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325472 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325530 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.325682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.427575 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.427635 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.427778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.427889 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.428035 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.428463 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.433762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.434537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.448665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.451711 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") pod \"nova-metadata-0\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.468885 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.598386 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736801 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736881 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736917 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736978 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.736999 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.737033 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.737124 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") pod \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\" (UID: \"8167d9b9-ec38-488f-90e8-d5e11a6b75be\") " Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.737971 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.745487 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.750617 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.751467 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-notification-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751484 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-notification-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.751499 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="sg-core" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751504 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="sg-core" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.751517 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-central-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751523 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-central-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: E0121 14:55:43.751535 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="proxy-httpd" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751540 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="proxy-httpd" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751710 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="proxy-httpd" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751726 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="sg-core" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751741 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-central-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.751749 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerName="ceilometer-notification-agent" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.752340 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.762356 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx" (OuterVolumeSpecName: "kube-api-access-wsdxx") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "kube-api-access-wsdxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.766835 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.782289 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts" (OuterVolumeSpecName: "scripts") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787614 4902 generic.go:334] "Generic (PLEG): container finished" podID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" exitCode=0 Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787644 4902 generic.go:334] "Generic (PLEG): container finished" podID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" exitCode=0 Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787653 4902 generic.go:334] "Generic (PLEG): container finished" podID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" exitCode=0 Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787718 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787730 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787743 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8167d9b9-ec38-488f-90e8-d5e11a6b75be","Type":"ContainerDied","Data":"46febc8a2b0ea55e7f385549716696e2304cee994189d6c31ce4c2f325ad134b"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787769 4902 scope.go:117] "RemoveContainer" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.787887 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.794190 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.794218 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lrj4d" event={"ID":"169597ed-1e1f-490a-8d17-0d6520ae39d1","Type":"ContainerDied","Data":"0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.794277 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0835df5efec2028b909596249b9d9f9a73e0f10cf3316792b66bed135b1a92af" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.809927 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b52494a8-ff56-449e-a274-b37eb4bad43d","Type":"ContainerStarted","Data":"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723"} Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.810380 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler" containerID="cri-o://0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" gracePeriod=30 Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.895245 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.895281 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8167d9b9-ec38-488f-90e8-d5e11a6b75be-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.895296 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsdxx\" (UniqueName: \"kubernetes.io/projected/8167d9b9-ec38-488f-90e8-d5e11a6b75be-kube-api-access-wsdxx\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.895315 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.900309 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.401914252 podStartE2EDuration="3.900289897s" podCreationTimestamp="2026-01-21 14:55:40 +0000 UTC" firstStartedPulling="2026-01-21 14:55:41.916435488 +0000 UTC m=+1303.993268517" lastFinishedPulling="2026-01-21 14:55:42.414811133 +0000 UTC m=+1304.491644162" observedRunningTime="2026-01-21 14:55:43.882388769 +0000 UTC m=+1305.959221788" watchObservedRunningTime="2026-01-21 14:55:43.900289897 +0000 UTC m=+1305.977122926" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.910157 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:43 crc kubenswrapper[4902]: I0121 14:55:43.988252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data" (OuterVolumeSpecName: "config-data") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.002720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.002803 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.002849 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.003001 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.003014 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.008204 4902 scope.go:117] "RemoveContainer" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.032764 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.103606 4902 scope.go:117] "RemoveContainer" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.104124 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.104187 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.104225 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.107280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8167d9b9-ec38-488f-90e8-d5e11a6b75be" (UID: "8167d9b9-ec38-488f-90e8-d5e11a6b75be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.108713 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.109802 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.122938 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") pod \"nova-cell1-conductor-0\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.132881 4902 scope.go:117] "RemoveContainer" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.164055 4902 scope.go:117] "RemoveContainer" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:44 crc kubenswrapper[4902]: E0121 14:55:44.164959 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": container with ID starting with 409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3 not found: ID does not exist" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.164990 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} err="failed to get container status \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": rpc error: code = NotFound desc = could not find container \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": container with ID starting with 409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165011 4902 scope.go:117] "RemoveContainer" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: E0121 14:55:44.165295 4902 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": container with ID starting with 8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35 not found: ID does not exist" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165321 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} err="failed to get container status \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": rpc error: code = NotFound desc = could not find container \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": container with ID starting with 8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165340 4902 scope.go:117] "RemoveContainer" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: E0121 14:55:44.165620 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": container with ID starting with f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e not found: ID does not exist" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165643 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} err="failed to get container status \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": rpc error: code = NotFound desc = could not find container \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": container with ID starting with f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165664 4902 scope.go:117] "RemoveContainer" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: E0121 14:55:44.165932 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": container with ID starting with ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1 not found: ID does not exist" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165952 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} err="failed to get container status \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": rpc error: code = NotFound desc = could not find container \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": container with ID starting with ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.165965 4902 scope.go:117] "RemoveContainer" 
containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.166237 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} err="failed to get container status \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": rpc error: code = NotFound desc = could not find container \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": container with ID starting with 409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.166254 4902 scope.go:117] "RemoveContainer" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.167688 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} err="failed to get container status \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": rpc error: code = NotFound desc = could not find container \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": container with ID starting with 8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.167712 4902 scope.go:117] "RemoveContainer" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168199 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} err="failed to get container status \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": rpc error: code = NotFound desc = could not find container \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": container with ID starting with f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168236 4902 scope.go:117] "RemoveContainer" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168621 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} err="failed to get container status \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": rpc error: code = NotFound desc = could not find container \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": container with ID starting with ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168640 4902 scope.go:117] "RemoveContainer" containerID="409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168831 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3"} err="failed to get container status \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": rpc error: code = NotFound desc = could not find 
container \"409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3\": container with ID starting with 409c88d9b9379f3e76465113a8bf530daef29a53869695208062deb32d90d0f3 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.168847 4902 scope.go:117] "RemoveContainer" containerID="8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.169082 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35"} err="failed to get container status \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": rpc error: code = NotFound desc = could not find container \"8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35\": container with ID starting with 8ade079fa6d84a49e3b93844f7e55ef42f845833600dd9f2ffa2ac1781652c35 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.169110 4902 scope.go:117] "RemoveContainer" containerID="f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.170996 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e"} err="failed to get container status \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": rpc error: code = NotFound desc = could not find container \"f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e\": container with ID starting with f5c03f4b13dffa60bfcb1b4f3e8739d4b1c817c7be143bcf3ce1e43d926fc15e not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.171036 4902 scope.go:117] "RemoveContainer" containerID="ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.171313 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1"} err="failed to get container status \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": rpc error: code = NotFound desc = could not find container \"ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1\": container with ID starting with ebe78eee69101ef5d037106d69cc5c98139bbbe7486a19b5fe808a65065c26d1 not found: ID does not exist" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.208773 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8167d9b9-ec38-488f-90e8-d5e11a6b75be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.277464 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.307639 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ac4ce1-6229-4354-a3a7-13251f691937" path="/var/lib/kubelet/pods/89ac4ce1-6229-4354-a3a7-13251f691937/volumes" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.308366 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c09608f-53ce-4d79-85d0-75bf0e552380" path="/var/lib/kubelet/pods/9c09608f-53ce-4d79-85d0-75bf0e552380/volumes" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.498470 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.542106 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.563911 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.570219 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.573609 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.573672 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.573772 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.575355 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.722447 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723218 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723536 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723602 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723714 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723769 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723827 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.723910 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825453 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825487 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825518 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825599 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825641 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825694 4902 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.825715 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.827841 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.828227 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.838594 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842447 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842546 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerStarted","Data":"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675"} Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842591 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842609 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerStarted","Data":"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7"} Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.842621 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerStarted","Data":"420d4e5dbc151ab2860e03ff284c833763ae6b775900cb8f0097accb8dfdab8c"} Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.844492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.844650 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.845721 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.846489 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.847918 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " pod="openstack/ceilometer-0" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.880487 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.880275478 podStartE2EDuration="1.880275478s" podCreationTimestamp="2026-01-21 14:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:44.872162011 +0000 UTC m=+1306.948995040" watchObservedRunningTime="2026-01-21 14:55:44.880275478 +0000 UTC m=+1306.957108507" Jan 21 14:55:44 crc kubenswrapper[4902]: I0121 14:55:44.914257 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:45 crc kubenswrapper[4902]: W0121 14:55:45.395615 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46fd9f2d_9b3f_46b4_9e16_8d0629431b8c.slice/crio-958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c WatchSource:0}: Error finding container 958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c: Status 404 returned error can't find the container with id 958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.397735 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.849993 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dbc235c8-beef-433d-b663-e1d09b6a9b65","Type":"ContainerStarted","Data":"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae"} Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.850057 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dbc235c8-beef-433d-b663-e1d09b6a9b65","Type":"ContainerStarted","Data":"b81adfeafc100f247345bb4dc1ec0bbf1a637bdabc4a363633412eb4f663c5f6"} Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.850369 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.853543 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c"} 
Jan 21 14:55:45 crc kubenswrapper[4902]: I0121 14:55:45.874428 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.874409986 podStartE2EDuration="2.874409986s" podCreationTimestamp="2026-01-21 14:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:45.870407229 +0000 UTC m=+1307.947240258" watchObservedRunningTime="2026-01-21 14:55:45.874409986 +0000 UTC m=+1307.951243015" Jan 21 14:55:46 crc kubenswrapper[4902]: E0121 14:55:46.257917 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:55:46 crc kubenswrapper[4902]: E0121 14:55:46.259456 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:55:46 crc kubenswrapper[4902]: E0121 14:55:46.260854 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:55:46 crc kubenswrapper[4902]: E0121 14:55:46.260892 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler" Jan 21 14:55:46 crc kubenswrapper[4902]: I0121 14:55:46.306563 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8167d9b9-ec38-488f-90e8-d5e11a6b75be" path="/var/lib/kubelet/pods/8167d9b9-ec38-488f-90e8-d5e11a6b75be/volumes" Jan 21 14:55:46 crc kubenswrapper[4902]: I0121 14:55:46.866059 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998"} Jan 21 14:55:47 crc kubenswrapper[4902]: I0121 14:55:47.877123 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7"} Jan 21 14:55:47 crc kubenswrapper[4902]: I0121 14:55:47.878304 4902 generic.go:334] "Generic (PLEG): container finished" podID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" exitCode=0 Jan 21 14:55:47 crc kubenswrapper[4902]: I0121 14:55:47.878328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cabfbeed-c979-4978-bdeb-68ac2c9023a1","Type":"ContainerDied","Data":"0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7"} Jan 21 14:55:48 crc 
kubenswrapper[4902]: I0121 14:55:48.408288 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.470478 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.470523 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.500555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") pod \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.500758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") pod \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.500799 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") pod \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\" (UID: \"cabfbeed-c979-4978-bdeb-68ac2c9023a1\") " Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.510256 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx" (OuterVolumeSpecName: "kube-api-access-7rzjx") pod "cabfbeed-c979-4978-bdeb-68ac2c9023a1" (UID: "cabfbeed-c979-4978-bdeb-68ac2c9023a1"). InnerVolumeSpecName "kube-api-access-7rzjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.529798 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cabfbeed-c979-4978-bdeb-68ac2c9023a1" (UID: "cabfbeed-c979-4978-bdeb-68ac2c9023a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.558315 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data" (OuterVolumeSpecName: "config-data") pod "cabfbeed-c979-4978-bdeb-68ac2c9023a1" (UID: "cabfbeed-c979-4978-bdeb-68ac2c9023a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.604383 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.604657 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cabfbeed-c979-4978-bdeb-68ac2c9023a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.604668 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rzjx\" (UniqueName: \"kubernetes.io/projected/cabfbeed-c979-4978-bdeb-68ac2c9023a1-kube-api-access-7rzjx\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.780460 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.888787 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f"} Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891882 4902 generic.go:334] "Generic (PLEG): container finished" podID="09716208-ecef-418b-b04b-fcfad53e017d" containerID="adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a" exitCode=0 Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891901 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerDied","Data":"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a"} Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891913 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891956 4902 scope.go:117] "RemoveContainer" containerID="adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.891945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09716208-ecef-418b-b04b-fcfad53e017d","Type":"ContainerDied","Data":"34f32efff8c1f6aabcc0c5371906f0935d5fd2d86c65b2814e1ce5ed501c9460"} Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.896245 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cabfbeed-c979-4978-bdeb-68ac2c9023a1","Type":"ContainerDied","Data":"d489edc5237f4ac81854560a301bc41de7cb9dc499684a0dffb569a687d5db5d"} Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.896454 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.907961 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") pod \"09716208-ecef-418b-b04b-fcfad53e017d\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.908671 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs" (OuterVolumeSpecName: "logs") pod "09716208-ecef-418b-b04b-fcfad53e017d" (UID: "09716208-ecef-418b-b04b-fcfad53e017d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.908901 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") pod \"09716208-ecef-418b-b04b-fcfad53e017d\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.909005 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") pod \"09716208-ecef-418b-b04b-fcfad53e017d\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.909031 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") pod \"09716208-ecef-418b-b04b-fcfad53e017d\" (UID: \"09716208-ecef-418b-b04b-fcfad53e017d\") " Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.910310 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09716208-ecef-418b-b04b-fcfad53e017d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.913533 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf" (OuterVolumeSpecName: "kube-api-access-9xcdf") pod "09716208-ecef-418b-b04b-fcfad53e017d" (UID: "09716208-ecef-418b-b04b-fcfad53e017d"). InnerVolumeSpecName "kube-api-access-9xcdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.914403 4902 scope.go:117] "RemoveContainer" containerID="802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.936112 4902 scope.go:117] "RemoveContainer" containerID="adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a" Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.936909 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a\": container with ID starting with adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a not found: ID does not exist" containerID="adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.936948 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a"} err="failed to get container status \"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a\": rpc error: code = NotFound desc = could not find container \"adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a\": container with ID starting with adb9b997becd44e11150ccb0cb8fc3883b87165bf455e9bc2b93c71f42dd979a not found: ID does not exist" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.936986 4902 scope.go:117] "RemoveContainer" containerID="802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d" Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.937363 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d\": container with ID starting with 802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d not found: ID does not exist" containerID="802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.937403 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d"} err="failed to get container status \"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d\": rpc error: code = NotFound desc = could not find container \"802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d\": container with ID starting with 802a20d2e4b2438705561d659e18d2d378d08ab6a57e6e96845151348c18101d not found: ID does not exist" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.937428 4902 scope.go:117] "RemoveContainer" containerID="0a33fc5008db4187bd7ddd59bc8804f14a26d7077e5c5b41e22b0a33b1e2dff7" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.950657 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.965424 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.965551 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09716208-ecef-418b-b04b-fcfad53e017d" (UID: "09716208-ecef-418b-b04b-fcfad53e017d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.986759 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.987315 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987338 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log" Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.987358 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987366 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api" Jan 21 14:55:48 crc kubenswrapper[4902]: E0121 14:55:48.987397 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987406 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987634 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" containerName="nova-scheduler-scheduler" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987671 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-api" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.987686 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="09716208-ecef-418b-b04b-fcfad53e017d" containerName="nova-api-log" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.988864 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.992199 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data" (OuterVolumeSpecName: "config-data") pod "09716208-ecef-418b-b04b-fcfad53e017d" (UID: "09716208-ecef-418b-b04b-fcfad53e017d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.997004 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 14:55:48 crc kubenswrapper[4902]: I0121 14:55:48.997014 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.015468 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.015496 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xcdf\" (UniqueName: \"kubernetes.io/projected/09716208-ecef-418b-b04b-fcfad53e017d-kube-api-access-9xcdf\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.015506 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09716208-ecef-418b-b04b-fcfad53e017d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:49 crc kubenswrapper[4902]: E0121 14:55:49.026085 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcabfbeed_c979_4978_bdeb_68ac2c9023a1.slice\": RecentStats: unable to find data in memory cache]" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.116627 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.116692 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.116837 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.218120 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.218200 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.218237 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.224236 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.227152 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.227159 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.232070 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.251787 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.253606 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.255712 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.264186 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.264603 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") pod \"nova-scheduler-0\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.314162 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.319383 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.319434 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.319454 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.319617 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.422137 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.422193 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.422211 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.422293 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.426090 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.442082 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") pod \"nova-api-0\" (UID: 
\"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.470064 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.478998 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") pod \"nova-api-0\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.583789 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.810865 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:49 crc kubenswrapper[4902]: I0121 14:55:49.922830 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfa319b-3748-4cf5-9254-2af8ad04ffdc","Type":"ContainerStarted","Data":"4096ebf03a14e0e73e0ea175c0db3b27f6bd8591d31d085c11182fa32bf5e186"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.117367 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:50 crc kubenswrapper[4902]: W0121 14:55:50.118962 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5577ca29_6f08_4b68_954f_8bdff5d886cc.slice/crio-96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf WatchSource:0}: Error finding container 96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf: Status 404 returned error can't find the container with id 96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.314831 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09716208-ecef-418b-b04b-fcfad53e017d" path="/var/lib/kubelet/pods/09716208-ecef-418b-b04b-fcfad53e017d/volumes" Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.315984 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cabfbeed-c979-4978-bdeb-68ac2c9023a1" path="/var/lib/kubelet/pods/cabfbeed-c979-4978-bdeb-68ac2c9023a1/volumes" Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.940703 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfa319b-3748-4cf5-9254-2af8ad04ffdc","Type":"ContainerStarted","Data":"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.942707 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerStarted","Data":"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.942742 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerStarted","Data":"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.942756 4902 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerStarted","Data":"96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf"} Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.960704 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.96068865 podStartE2EDuration="2.96068865s" podCreationTimestamp="2026-01-21 14:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:50.955633336 +0000 UTC m=+1313.032466385" watchObservedRunningTime="2026-01-21 14:55:50.96068865 +0000 UTC m=+1313.037521679" Jan 21 14:55:50 crc kubenswrapper[4902]: I0121 14:55:50.984131 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.9841131349999999 podStartE2EDuration="1.984113135s" podCreationTimestamp="2026-01-21 14:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:50.975873466 +0000 UTC m=+1313.052706505" watchObservedRunningTime="2026-01-21 14:55:50.984113135 +0000 UTC m=+1313.060946164" Jan 21 14:55:51 crc kubenswrapper[4902]: I0121 14:55:51.305244 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 14:55:53 crc kubenswrapper[4902]: I0121 14:55:53.469836 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:55:53 crc kubenswrapper[4902]: I0121 14:55:53.470474 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:55:54 crc kubenswrapper[4902]: I0121 14:55:54.313381 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 14:55:54 crc kubenswrapper[4902]: I0121 14:55:54.314471 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 14:55:54 crc kubenswrapper[4902]: I0121 14:55:54.484227 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:55:54 crc kubenswrapper[4902]: I0121 14:55:54.484389 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:55:56 crc kubenswrapper[4902]: I0121 14:55:56.002631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerStarted","Data":"bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7"} Jan 21 14:55:56 crc kubenswrapper[4902]: I0121 14:55:56.003217 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:55:56 crc kubenswrapper[4902]: I0121 14:55:56.036300 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.694638346 podStartE2EDuration="12.036277201s" podCreationTimestamp="2026-01-21 14:55:44 +0000 UTC" firstStartedPulling="2026-01-21 14:55:45.397770902 +0000 UTC m=+1307.474603931" lastFinishedPulling="2026-01-21 14:55:54.739409737 +0000 UTC m=+1316.816242786" observedRunningTime="2026-01-21 14:55:56.028816192 +0000 UTC m=+1318.105649281" watchObservedRunningTime="2026-01-21 14:55:56.036277201 +0000 UTC m=+1318.113110230" Jan 21 14:55:59 crc kubenswrapper[4902]: I0121 14:55:59.315289 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 14:55:59 crc kubenswrapper[4902]: I0121 14:55:59.348116 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 14:55:59 crc kubenswrapper[4902]: I0121 14:55:59.584946 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:55:59 crc kubenswrapper[4902]: I0121 14:55:59.585036 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:56:00 crc kubenswrapper[4902]: I0121 14:56:00.067015 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 14:56:00 crc kubenswrapper[4902]: I0121 14:56:00.667358 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:00 crc kubenswrapper[4902]: I0121 14:56:00.668877 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:03 crc kubenswrapper[4902]: I0121 14:56:03.476698 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:56:03 crc kubenswrapper[4902]: I0121 14:56:03.477289 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:56:03 crc kubenswrapper[4902]: I0121 14:56:03.485958 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:56:03 crc kubenswrapper[4902]: I0121 14:56:03.490918 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:56:06 crc kubenswrapper[4902]: I0121 14:56:06.946531 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.115823 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") pod \"c7593ca7-9aeb-4763-8bc3-964147d459ce\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117033 4902 generic.go:334] "Generic (PLEG): container finished" podID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerID="84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" exitCode=137 Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117128 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7593ca7-9aeb-4763-8bc3-964147d459ce","Type":"ContainerDied","Data":"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b"} Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117408 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7593ca7-9aeb-4763-8bc3-964147d459ce","Type":"ContainerDied","Data":"20c0da8ce9148a9ce1d2bbb934c0cba1985f7cac4f00b74da4ba453452a4725d"} Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117448 4902 scope.go:117] "RemoveContainer" containerID="84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117450 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") pod \"c7593ca7-9aeb-4763-8bc3-964147d459ce\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117500 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") pod \"c7593ca7-9aeb-4763-8bc3-964147d459ce\" (UID: \"c7593ca7-9aeb-4763-8bc3-964147d459ce\") " Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.117145 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.124435 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n" (OuterVolumeSpecName: "kube-api-access-fnv6n") pod "c7593ca7-9aeb-4763-8bc3-964147d459ce" (UID: "c7593ca7-9aeb-4763-8bc3-964147d459ce"). InnerVolumeSpecName "kube-api-access-fnv6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.153327 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data" (OuterVolumeSpecName: "config-data") pod "c7593ca7-9aeb-4763-8bc3-964147d459ce" (UID: "c7593ca7-9aeb-4763-8bc3-964147d459ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.162064 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7593ca7-9aeb-4763-8bc3-964147d459ce" (UID: "c7593ca7-9aeb-4763-8bc3-964147d459ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.219405 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.219467 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnv6n\" (UniqueName: \"kubernetes.io/projected/c7593ca7-9aeb-4763-8bc3-964147d459ce-kube-api-access-fnv6n\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.219488 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7593ca7-9aeb-4763-8bc3-964147d459ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.249332 4902 scope.go:117] "RemoveContainer" containerID="84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" Jan 21 14:56:07 crc kubenswrapper[4902]: E0121 14:56:07.249859 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b\": container with ID starting with 84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b not found: ID does not exist" containerID="84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.249909 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b"} err="failed to get container status \"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b\": rpc error: code = NotFound desc = could not find container \"84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b\": container with ID starting with 84991ebda977e389d77fef722f29782f550d41b84d63e665d2a84525a03b668b not found: ID does not exist" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.458770 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.471322 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.482173 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:07 crc kubenswrapper[4902]: E0121 14:56:07.482729 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.482749 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.482989 4902 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.484641 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.489149 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.489472 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.490339 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.501285 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.627256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.627781 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.627877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.628126 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.628207 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729540 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729608 4902 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729684 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.729798 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.736071 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.742779 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.748592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.748829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.754919 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") pod \"nova-cell1-novncproxy-0\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:07 crc kubenswrapper[4902]: I0121 14:56:07.828926 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:08 crc kubenswrapper[4902]: I0121 14:56:08.323100 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7593ca7-9aeb-4763-8bc3-964147d459ce" path="/var/lib/kubelet/pods/c7593ca7-9aeb-4763-8bc3-964147d459ce/volumes" Jan 21 14:56:08 crc kubenswrapper[4902]: I0121 14:56:08.324728 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.140665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044","Type":"ContainerStarted","Data":"51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99"} Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.141361 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044","Type":"ContainerStarted","Data":"140924a047cb28624865b0efcf1a901932347a50fbd34bbfa1c4027f44fbc891"} Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.164914 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.164898131 podStartE2EDuration="2.164898131s" podCreationTimestamp="2026-01-21 14:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:09.160612687 +0000 UTC m=+1331.237445716" watchObservedRunningTime="2026-01-21 14:56:09.164898131 +0000 UTC m=+1331.241731160" Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.589964 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.590591 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.590819 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:56:09 crc kubenswrapper[4902]: I0121 14:56:09.600745 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.149878 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.156341 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.375377 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.377121 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.392682 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.580757 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581128 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581344 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581451 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581653 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.581693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j28lj\" (UniqueName: \"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683165 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683665 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j28lj\" (UniqueName: \"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683783 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683890 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.683988 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684128 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684269 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684772 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684880 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684956 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.684995 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.712139 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j28lj\" (UniqueName: 
\"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") pod \"dnsmasq-dns-fcd6f8f8f-gzrwg\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:10 crc kubenswrapper[4902]: I0121 14:56:10.996105 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:11 crc kubenswrapper[4902]: I0121 14:56:11.452291 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.178598 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerID="462406faba8c1d9f8c0864988f3185e2594f2024aa4406a8b2fa2099a7006d0c" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.178836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerDied","Data":"462406faba8c1d9f8c0864988f3185e2594f2024aa4406a8b2fa2099a7006d0c"} Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.179025 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerStarted","Data":"e36154beae48e47217e600b25e3832ce07f5b5cba75bd916fc8d19d2d77082ca"} Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375119 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375522 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-central-agent" containerID="cri-o://73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375678 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" containerID="cri-o://bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375739 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="sg-core" containerID="cri-o://7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.375780 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-notification-agent" containerID="cri-o://9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.390860 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.193:3000/\": EOF" Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.829269 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:12 crc kubenswrapper[4902]: I0121 14:56:12.957739 4902 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202154 4902 generic.go:334] "Generic (PLEG): container finished" podID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerID="bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202189 4902 generic.go:334] "Generic (PLEG): container finished" podID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerID="7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f" exitCode=2 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202200 4902 generic.go:334] "Generic (PLEG): container finished" podID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerID="73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202220 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7"} Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202279 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f"} Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.202294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998"} Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.204198 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerStarted","Data":"193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef"} Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.204354 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log" containerID="cri-o://ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" gracePeriod=30 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.204394 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api" containerID="cri-o://c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" gracePeriod=30 Jan 21 14:56:13 crc kubenswrapper[4902]: I0121 14:56:13.245230 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" podStartSLOduration=3.245208552 podStartE2EDuration="3.245208552s" podCreationTimestamp="2026-01-21 14:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:13.241810321 +0000 UTC m=+1335.318643350" watchObservedRunningTime="2026-01-21 14:56:13.245208552 +0000 UTC m=+1335.322041581" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.217307 4902 generic.go:334] "Generic (PLEG): container finished" podID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerID="ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" exitCode=143 Jan 21 14:56:14 crc 
kubenswrapper[4902]: I0121 14:56:14.217455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerDied","Data":"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1"} Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.222721 4902 generic.go:334] "Generic (PLEG): container finished" podID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerID="9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7" exitCode=0 Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.222778 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7"} Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.223096 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.472149 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670587 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670698 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670777 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670845 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670927 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670968 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.670991 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") pod 
\"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.671026 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") pod \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\" (UID: \"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c\") " Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.672248 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.673513 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.676968 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c" (OuterVolumeSpecName: "kube-api-access-zc46c") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "kube-api-access-zc46c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.677501 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts" (OuterVolumeSpecName: "scripts") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.704665 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.746238 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.764376 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773283 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773312 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773323 4902 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773332 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773342 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773350 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc46c\" (UniqueName: \"kubernetes.io/projected/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-kube-api-access-zc46c\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.773359 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.799837 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data" (OuterVolumeSpecName: "config-data") pod "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" (UID: "46fd9f2d-9b3f-46b4-9e16-8d0629431b8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4902]: I0121 14:56:14.875801 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.235145 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.235230 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"46fd9f2d-9b3f-46b4-9e16-8d0629431b8c","Type":"ContainerDied","Data":"958f1bdb995189beafc661310dd6706581a17051cfd226d0ba249d51be82b53c"} Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.235365 4902 scope.go:117] "RemoveContainer" containerID="bd52d798dc010916ae8edaeb3274cb5243189094cb02b39553324e926f46afe7" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.262484 4902 scope.go:117] "RemoveContainer" containerID="7ad84b6a868fbbeb6497690a3b9ab62535445a584970624ac9e7776480ccd69f" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.275895 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.285771 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.296357 4902 scope.go:117] "RemoveContainer" containerID="9dcf6007e20245ce52b38ae3acb298e83909872d194cb4df4467e5731bef26a7" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.315625 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:15 crc kubenswrapper[4902]: E0121 14:56:15.316245 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-notification-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316263 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-notification-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: E0121 14:56:15.316280 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="sg-core" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316288 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="sg-core" Jan 21 14:56:15 crc kubenswrapper[4902]: E0121 14:56:15.316312 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-central-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316317 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-central-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: E0121 14:56:15.316336 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316342 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316552 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="sg-core" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316569 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-central-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316582 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="proxy-httpd" Jan 21 
14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.316595 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" containerName="ceilometer-notification-agent" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.320788 4902 scope.go:117] "RemoveContainer" containerID="73d5c6a2e3d4353a6126370981cecb65384400070335eb42053d76f078ba3998" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.322253 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.324301 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.325667 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.325807 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.330139 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489382 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489468 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489495 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489700 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489866 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.489957 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.490176 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.490381 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591500 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591587 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591652 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591691 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591716 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.591796 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 
14:56:15.592720 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.593077 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.597448 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.597467 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.597502 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.598236 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.599523 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.624500 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") pod \"ceilometer-0\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " pod="openstack/ceilometer-0" Jan 21 14:56:15 crc kubenswrapper[4902]: I0121 14:56:15.684832 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:56:16 crc kubenswrapper[4902]: W0121 14:56:16.138467 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod874c6c46_dedc_4ec9_8ee5_c45ef9cddb53.slice/crio-c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e WatchSource:0}: Error finding container c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e: Status 404 returned error can't find the container with id c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.148595 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.251370 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e"} Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.308406 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46fd9f2d-9b3f-46b4-9e16-8d0629431b8c" path="/var/lib/kubelet/pods/46fd9f2d-9b3f-46b4-9e16-8d0629431b8c/volumes" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.832892 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.917853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") pod \"5577ca29-6f08-4b68-954f-8bdff5d886cc\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.917946 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") pod \"5577ca29-6f08-4b68-954f-8bdff5d886cc\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.918745 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") pod \"5577ca29-6f08-4b68-954f-8bdff5d886cc\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.918804 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") pod \"5577ca29-6f08-4b68-954f-8bdff5d886cc\" (UID: \"5577ca29-6f08-4b68-954f-8bdff5d886cc\") " Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.919303 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs" (OuterVolumeSpecName: "logs") pod "5577ca29-6f08-4b68-954f-8bdff5d886cc" (UID: "5577ca29-6f08-4b68-954f-8bdff5d886cc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.924302 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq" (OuterVolumeSpecName: "kube-api-access-hv2xq") pod "5577ca29-6f08-4b68-954f-8bdff5d886cc" (UID: "5577ca29-6f08-4b68-954f-8bdff5d886cc"). InnerVolumeSpecName "kube-api-access-hv2xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.958165 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data" (OuterVolumeSpecName: "config-data") pod "5577ca29-6f08-4b68-954f-8bdff5d886cc" (UID: "5577ca29-6f08-4b68-954f-8bdff5d886cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4902]: I0121 14:56:16.975262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5577ca29-6f08-4b68-954f-8bdff5d886cc" (UID: "5577ca29-6f08-4b68-954f-8bdff5d886cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.020556 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv2xq\" (UniqueName: \"kubernetes.io/projected/5577ca29-6f08-4b68-954f-8bdff5d886cc-kube-api-access-hv2xq\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.020587 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.020597 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5577ca29-6f08-4b68-954f-8bdff5d886cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.020605 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5577ca29-6f08-4b68-954f-8bdff5d886cc-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261407 4902 generic.go:334] "Generic (PLEG): container finished" podID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerID="c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" exitCode=0 Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261484 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261509 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerDied","Data":"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be"} Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261867 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5577ca29-6f08-4b68-954f-8bdff5d886cc","Type":"ContainerDied","Data":"96e9941d2b125212d0ca1c62e5f8963a43be81ae9253acc0176ed7621b2023bf"} Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.261885 4902 scope.go:117] "RemoveContainer" containerID="c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.263765 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110"} Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.284815 4902 scope.go:117] "RemoveContainer" containerID="ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.308598 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.316419 4902 scope.go:117] "RemoveContainer" containerID="c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" Jan 21 14:56:17 crc kubenswrapper[4902]: E0121 14:56:17.319327 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be\": container with ID starting with c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be not found: ID does not exist" containerID="c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.319379 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be"} err="failed to get container status \"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be\": rpc error: code = NotFound desc = could not find container \"c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be\": container with ID starting with c7fee5b97b637c6b5f7d26e3b07dc2cde098647c8ed11221ea9394660771c3be not found: ID does not exist" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.319403 4902 scope.go:117] "RemoveContainer" containerID="ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.319417 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:17 crc kubenswrapper[4902]: E0121 14:56:17.324315 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1\": container with ID starting with ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1 not found: ID does not exist" containerID="ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.324370 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1"} err="failed to get container status \"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1\": rpc error: code = NotFound desc = could not find container \"ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1\": container with ID starting with ace262feda10e2e9c07ee0fca4a9f7be7b06d1e3cac3c365efa333a77991ffb1 not found: ID does not exist" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344159 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:17 crc kubenswrapper[4902]: E0121 14:56:17.344646 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344666 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api" Jan 21 14:56:17 crc kubenswrapper[4902]: E0121 14:56:17.344696 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344706 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344945 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-log" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.344974 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" containerName="nova-api-api" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.346126 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.351876 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.352286 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.352436 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.354370 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538268 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538425 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538454 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538527 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.538571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639642 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") pod \"nova-api-0\" (UID: 
\"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639774 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639794 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639844 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.639863 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.640537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.643272 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.643587 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.644287 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.648574 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.662472 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") pod \"nova-api-0\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " 
pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.692262 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.829529 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:17 crc kubenswrapper[4902]: I0121 14:56:17.855485 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.180646 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:18 crc kubenswrapper[4902]: W0121 14:56:18.183607 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd799afd_06ad_483d_b59d_9b5c1e947a6a.slice/crio-e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2 WatchSource:0}: Error finding container e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2: Status 404 returned error can't find the container with id e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2 Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.285370 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerStarted","Data":"e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2"} Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.288546 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67"} Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.306895 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5577ca29-6f08-4b68-954f-8bdff5d886cc" path="/var/lib/kubelet/pods/5577ca29-6f08-4b68-954f-8bdff5d886cc/volumes" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.309624 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.494087 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.495277 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.506414 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.509943 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.546380 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.563801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.563949 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.563985 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.564007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.664512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.664764 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.664784 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.664855 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.668860 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.670066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.672651 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.681619 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") pod \"nova-cell1-cell-mapping-2k78d\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:18 crc kubenswrapper[4902]: I0121 14:56:18.691744 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.164960 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:56:19 crc kubenswrapper[4902]: W0121 14:56:19.166658 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f3ab19a_d650_41ea_aadd_8ec73ed824f2.slice/crio-a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2 WatchSource:0}: Error finding container a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2: Status 404 returned error can't find the container with id a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2 Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.301864 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881"} Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.303710 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2k78d" event={"ID":"8f3ab19a-d650-41ea-aadd-8ec73ed824f2","Type":"ContainerStarted","Data":"a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2"} Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.306440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerStarted","Data":"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e"} Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.306709 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerStarted","Data":"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0"} Jan 21 14:56:19 crc kubenswrapper[4902]: I0121 14:56:19.339916 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.339892976 podStartE2EDuration="2.339892976s" podCreationTimestamp="2026-01-21 14:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:19.333509895 +0000 UTC m=+1341.410342924" watchObservedRunningTime="2026-01-21 14:56:19.339892976 +0000 UTC m=+1341.416726025" Jan 21 14:56:20 crc kubenswrapper[4902]: I0121 14:56:20.318030 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2k78d" event={"ID":"8f3ab19a-d650-41ea-aadd-8ec73ed824f2","Type":"ContainerStarted","Data":"878874319de7ae0b30076fed21352753826b954ce4e5342f533a40aa94a4f9e8"} Jan 21 14:56:20 crc kubenswrapper[4902]: I0121 14:56:20.322599 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerStarted","Data":"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0"} Jan 21 14:56:20 crc kubenswrapper[4902]: I0121 14:56:20.340152 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2k78d" podStartSLOduration=2.340134377 podStartE2EDuration="2.340134377s" podCreationTimestamp="2026-01-21 14:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-21 14:56:20.335609576 +0000 UTC m=+1342.412442605" watchObservedRunningTime="2026-01-21 14:56:20.340134377 +0000 UTC m=+1342.416967406" Jan 21 14:56:20 crc kubenswrapper[4902]: I0121 14:56:20.997282 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.023007 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.191687083 podStartE2EDuration="6.022981792s" podCreationTimestamp="2026-01-21 14:56:15 +0000 UTC" firstStartedPulling="2026-01-21 14:56:16.143593835 +0000 UTC m=+1338.220426864" lastFinishedPulling="2026-01-21 14:56:19.974888524 +0000 UTC m=+1342.051721573" observedRunningTime="2026-01-21 14:56:20.364490297 +0000 UTC m=+1342.441323336" watchObservedRunningTime="2026-01-21 14:56:21.022981792 +0000 UTC m=+1343.099814821" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.052032 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.052470 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="dnsmasq-dns" containerID="cri-o://33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc" gracePeriod=10 Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.332558 4902 generic.go:334] "Generic (PLEG): container finished" podID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerID="33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc" exitCode=0 Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.332683 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerDied","Data":"33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc"} Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.333979 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.566599 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719353 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719726 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719938 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719974 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.719993 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.720018 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") pod \"38f0c216-daa1-42c6-9105-11ad7d5fc686\" (UID: \"38f0c216-daa1-42c6-9105-11ad7d5fc686\") " Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.729253 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69" (OuterVolumeSpecName: "kube-api-access-zqp69") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "kube-api-access-zqp69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.781939 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.784586 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config" (OuterVolumeSpecName: "config") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.786374 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.796807 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.814546 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38f0c216-daa1-42c6-9105-11ad7d5fc686" (UID: "38f0c216-daa1-42c6-9105-11ad7d5fc686"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821764 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821803 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821819 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821833 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821845 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38f0c216-daa1-42c6-9105-11ad7d5fc686-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4902]: I0121 14:56:21.821857 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqp69\" (UniqueName: \"kubernetes.io/projected/38f0c216-daa1-42c6-9105-11ad7d5fc686-kube-api-access-zqp69\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.346980 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.347458 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" event={"ID":"38f0c216-daa1-42c6-9105-11ad7d5fc686","Type":"ContainerDied","Data":"39dee8363ceb2e4e3e0e527468730e2f88866ca83679486319418b5875a94a82"} Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.347486 4902 scope.go:117] "RemoveContainer" containerID="33ac3053e080a371fb9c1294b84f90b6187ab8ed37ebcb04994475127b9d12dc" Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.380538 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.381632 4902 scope.go:117] "RemoveContainer" containerID="e33abee5a4d9568ebeebc43b93ca969e5d4b5cadc5a0cff7461433d918dfb71d" Jan 21 14:56:22 crc kubenswrapper[4902]: I0121 14:56:22.390136 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-2m4b6"] Jan 21 14:56:24 crc kubenswrapper[4902]: I0121 14:56:24.305576 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" path="/var/lib/kubelet/pods/38f0c216-daa1-42c6-9105-11ad7d5fc686/volumes" Jan 21 14:56:24 crc kubenswrapper[4902]: I0121 14:56:24.368014 4902 generic.go:334] "Generic (PLEG): container finished" podID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" containerID="878874319de7ae0b30076fed21352753826b954ce4e5342f533a40aa94a4f9e8" exitCode=0 Jan 21 14:56:24 crc kubenswrapper[4902]: I0121 14:56:24.368070 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2k78d" event={"ID":"8f3ab19a-d650-41ea-aadd-8ec73ed824f2","Type":"ContainerDied","Data":"878874319de7ae0b30076fed21352753826b954ce4e5342f533a40aa94a4f9e8"} Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.751522 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.797148 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") pod \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.798802 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") pod \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.869947 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f3ab19a-d650-41ea-aadd-8ec73ed824f2" (UID: "8f3ab19a-d650-41ea-aadd-8ec73ed824f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.869975 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data" (OuterVolumeSpecName: "config-data") pod "8f3ab19a-d650-41ea-aadd-8ec73ed824f2" (UID: "8f3ab19a-d650-41ea-aadd-8ec73ed824f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.901497 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") pod \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.901566 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") pod \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\" (UID: \"8f3ab19a-d650-41ea-aadd-8ec73ed824f2\") " Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.902400 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.902421 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.906062 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr" (OuterVolumeSpecName: "kube-api-access-f9mgr") pod "8f3ab19a-d650-41ea-aadd-8ec73ed824f2" (UID: "8f3ab19a-d650-41ea-aadd-8ec73ed824f2"). InnerVolumeSpecName "kube-api-access-f9mgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:25 crc kubenswrapper[4902]: I0121 14:56:25.906105 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts" (OuterVolumeSpecName: "scripts") pod "8f3ab19a-d650-41ea-aadd-8ec73ed824f2" (UID: "8f3ab19a-d650-41ea-aadd-8ec73ed824f2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.004833 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.004865 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9mgr\" (UniqueName: \"kubernetes.io/projected/8f3ab19a-d650-41ea-aadd-8ec73ed824f2-kube-api-access-f9mgr\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.300412 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-647df7b8c5-2m4b6" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.387910 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2k78d" event={"ID":"8f3ab19a-d650-41ea-aadd-8ec73ed824f2","Type":"ContainerDied","Data":"a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2"} Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.388278 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6af641e9642822dba90080f660756012cdf63aa5e5e1d6680572bc43b1b15f2" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.388161 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2k78d" Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.565840 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.566152 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerName="nova-scheduler-scheduler" containerID="cri-o://2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" gracePeriod=30 Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.576350 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.576628 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-log" containerID="cri-o://1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" gracePeriod=30 Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.576671 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-api" containerID="cri-o://946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" gracePeriod=30 Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.641256 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.641531 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" containerID="cri-o://af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" gracePeriod=30 Jan 21 14:56:26 crc kubenswrapper[4902]: I0121 14:56:26.641635 4902 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" containerID="cri-o://f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" gracePeriod=30 Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.151399 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.327868 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.327941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328026 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328078 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328121 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328149 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") pod \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\" (UID: \"cd799afd-06ad-483d-b59d-9b5c1e947a6a\") " Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328258 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs" (OuterVolumeSpecName: "logs") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.328508 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd799afd-06ad-483d-b59d-9b5c1e947a6a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.332756 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5" (OuterVolumeSpecName: "kube-api-access-2w9q5") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "kube-api-access-2w9q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.360173 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.360246 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data" (OuterVolumeSpecName: "config-data") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.391704 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.393936 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cd799afd-06ad-483d-b59d-9b5c1e947a6a" (UID: "cd799afd-06ad-483d-b59d-9b5c1e947a6a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.401018 4902 generic.go:334] "Generic (PLEG): container finished" podID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerID="af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" exitCode=143 Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.401119 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerDied","Data":"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7"} Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403017 4902 generic.go:334] "Generic (PLEG): container finished" podID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" exitCode=0 Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403036 4902 generic.go:334] "Generic (PLEG): container finished" podID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" exitCode=143 Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403066 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerDied","Data":"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e"} Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403084 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerDied","Data":"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0"} Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403095 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cd799afd-06ad-483d-b59d-9b5c1e947a6a","Type":"ContainerDied","Data":"e337f3f5a33903a3ca45aa510f3c236212e8e9e8cef1f827f67fc4fbb4689ba2"} Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403111 4902 scope.go:117] "RemoveContainer" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.403227 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431481 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431738 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431834 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431914 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9q5\" (UniqueName: \"kubernetes.io/projected/cd799afd-06ad-483d-b59d-9b5c1e947a6a-kube-api-access-2w9q5\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.431997 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd799afd-06ad-483d-b59d-9b5c1e947a6a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.438703 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.444459 4902 scope.go:117] "RemoveContainer" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.449435 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.463285 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.463969 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-api" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.463992 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-api" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.464033 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="init" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.464075 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="init" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.464098 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" containerName="nova-manage" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.464108 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" containerName="nova-manage" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.464121 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="dnsmasq-dns" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.464154 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" 
containerName="dnsmasq-dns" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.464174 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-log" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.464183 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-log" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.465202 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-log" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.465234 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" containerName="nova-manage" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.465254 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" containerName="nova-api-api" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.465265 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f0c216-daa1-42c6-9105-11ad7d5fc686" containerName="dnsmasq-dns" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.466692 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.468622 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.468708 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.468718 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.471678 4902 scope.go:117] "RemoveContainer" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.472146 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": container with ID starting with 946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e not found: ID does not exist" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472170 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e"} err="failed to get container status \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": rpc error: code = NotFound desc = could not find container \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": container with ID starting with 946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e not found: ID does not exist" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472188 4902 scope.go:117] "RemoveContainer" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" Jan 21 14:56:27 crc kubenswrapper[4902]: E0121 14:56:27.472551 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": 
container with ID starting with 1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0 not found: ID does not exist" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472570 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0"} err="failed to get container status \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": rpc error: code = NotFound desc = could not find container \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": container with ID starting with 1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0 not found: ID does not exist" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472584 4902 scope.go:117] "RemoveContainer" containerID="946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472924 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e"} err="failed to get container status \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": rpc error: code = NotFound desc = could not find container \"946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e\": container with ID starting with 946a6a01b06d7b8976608a1730f52953f85b648fefe11828afb8591ad695188e not found: ID does not exist" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.472941 4902 scope.go:117] "RemoveContainer" containerID="1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.475731 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.477342 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0"} err="failed to get container status \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": rpc error: code = NotFound desc = could not find container \"1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0\": container with ID starting with 1428f2bed7674de965c391bfd60cb7a3116c947c3b1e7ebcaf5b25766df602a0 not found: ID does not exist" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.635997 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636348 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636537 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636660 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636818 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.636878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.738932 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739015 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739061 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739090 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.739774 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") pod \"nova-api-0\" (UID: 
\"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.744961 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.745575 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.746117 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.746125 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.760795 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") pod \"nova-api-0\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " pod="openstack/nova-api-0" Jan 21 14:56:27 crc kubenswrapper[4902]: I0121 14:56:27.789702 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.264873 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.318810 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd799afd-06ad-483d-b59d-9b5c1e947a6a" path="/var/lib/kubelet/pods/cd799afd-06ad-483d-b59d-9b5c1e947a6a/volumes" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.334193 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.445740 4902 generic.go:334] "Generic (PLEG): container finished" podID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerID="2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" exitCode=0 Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.445836 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfa319b-3748-4cf5-9254-2af8ad04ffdc","Type":"ContainerDied","Data":"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096"} Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.445869 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2cfa319b-3748-4cf5-9254-2af8ad04ffdc","Type":"ContainerDied","Data":"4096ebf03a14e0e73e0ea175c0db3b27f6bd8591d31d085c11182fa32bf5e186"} Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.445888 4902 scope.go:117] "RemoveContainer" containerID="2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.446072 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.448895 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerStarted","Data":"6a0fa8e1aa73ccaec735410bd00188e5105d8445c279b0829562f3033236ffec"} Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.450454 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") pod \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.450619 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") pod \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.450743 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") pod \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\" (UID: \"2cfa319b-3748-4cf5-9254-2af8ad04ffdc\") " Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.460357 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm" (OuterVolumeSpecName: "kube-api-access-c97cm") pod "2cfa319b-3748-4cf5-9254-2af8ad04ffdc" (UID: "2cfa319b-3748-4cf5-9254-2af8ad04ffdc"). InnerVolumeSpecName "kube-api-access-c97cm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.474301 4902 scope.go:117] "RemoveContainer" containerID="2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" Jan 21 14:56:28 crc kubenswrapper[4902]: E0121 14:56:28.474826 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096\": container with ID starting with 2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096 not found: ID does not exist" containerID="2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.474884 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096"} err="failed to get container status \"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096\": rpc error: code = NotFound desc = could not find container \"2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096\": container with ID starting with 2210a960bb93e1771d1c550ad958510beb26a143a53531deaa2077a32a083096 not found: ID does not exist" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.480678 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data" (OuterVolumeSpecName: "config-data") pod "2cfa319b-3748-4cf5-9254-2af8ad04ffdc" (UID: "2cfa319b-3748-4cf5-9254-2af8ad04ffdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.481530 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cfa319b-3748-4cf5-9254-2af8ad04ffdc" (UID: "2cfa319b-3748-4cf5-9254-2af8ad04ffdc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.554546 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.554587 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c97cm\" (UniqueName: \"kubernetes.io/projected/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-kube-api-access-c97cm\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.554603 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfa319b-3748-4cf5-9254-2af8ad04ffdc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.786985 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.797799 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.812773 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: E0121 14:56:28.813198 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerName="nova-scheduler-scheduler" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.813219 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerName="nova-scheduler-scheduler" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.813387 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" containerName="nova-scheduler-scheduler" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.814007 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.816624 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.828012 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.961800 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvj2m\" (UniqueName: \"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.961876 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:28 crc kubenswrapper[4902]: I0121 14:56:28.962164 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.063280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.063696 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvj2m\" (UniqueName: \"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.063823 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.069764 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.069783 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.086929 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvj2m\" (UniqueName: 
\"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") pod \"nova-scheduler-0\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.128883 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.460702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerStarted","Data":"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748"} Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.461116 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerStarted","Data":"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f"} Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.493799 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.493769527 podStartE2EDuration="2.493769527s" podCreationTimestamp="2026-01-21 14:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:29.477118903 +0000 UTC m=+1351.553951962" watchObservedRunningTime="2026-01-21 14:56:29.493769527 +0000 UTC m=+1351.570602586" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.605494 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.780556 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:47370->10.217.0.191:8775: read: connection reset by peer" Jan 21 14:56:29 crc kubenswrapper[4902]: I0121 14:56:29.780879 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:47386->10.217.0.191:8775: read: connection reset by peer" Jan 21 14:56:29 crc kubenswrapper[4902]: E0121 14:56:29.972566 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8f8e17e_d3bd_45ef_bc21_53e079ab3b4a.slice/crio-conmon-f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.231566 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.312009 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfa319b-3748-4cf5-9254-2af8ad04ffdc" path="/var/lib/kubelet/pods/2cfa319b-3748-4cf5-9254-2af8ad04ffdc/volumes" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.386959 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.387278 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.387310 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.387349 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.387377 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") pod \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\" (UID: \"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a\") " Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.388463 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs" (OuterVolumeSpecName: "logs") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.393032 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v" (OuterVolumeSpecName: "kube-api-access-4g62v") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "kube-api-access-4g62v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.419298 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.420139 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data" (OuterVolumeSpecName: "config-data") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.444663 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" (UID: "a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.472459 4902 generic.go:334] "Generic (PLEG): container finished" podID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerID="f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" exitCode=0 Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.472525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerDied","Data":"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675"} Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.473606 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a","Type":"ContainerDied","Data":"420d4e5dbc151ab2860e03ff284c833763ae6b775900cb8f0097accb8dfdab8c"} Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.473659 4902 scope.go:117] "RemoveContainer" containerID="f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.472535 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.478704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c366100e-d2a0-4be9-965f-ef7b7ad39f78","Type":"ContainerStarted","Data":"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812"} Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.478753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c366100e-d2a0-4be9-965f-ef7b7ad39f78","Type":"ContainerStarted","Data":"f71b431a165886dfcb60b7772fbf29ab480085d500ccd4f828f82ea85ca3c58b"} Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489363 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489396 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489406 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g62v\" (UniqueName: \"kubernetes.io/projected/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-kube-api-access-4g62v\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489415 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.489424 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.506219 4902 scope.go:117] "RemoveContainer" containerID="af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.515208 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.5151871630000002 podStartE2EDuration="2.515187163s" podCreationTimestamp="2026-01-21 14:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:30.501296903 +0000 UTC m=+1352.578129932" watchObservedRunningTime="2026-01-21 14:56:30.515187163 +0000 UTC m=+1352.592020192" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.557022 4902 scope.go:117] "RemoveContainer" containerID="f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.557819 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:30 crc kubenswrapper[4902]: E0121 14:56:30.559166 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675\": container with ID starting with f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675 not found: ID does not exist" containerID="f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675" Jan 21 
14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.559260 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675"} err="failed to get container status \"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675\": rpc error: code = NotFound desc = could not find container \"f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675\": container with ID starting with f071e59e3b9ec920f7f24ff40ef1372da857864b80456387fba7e7870751b675 not found: ID does not exist" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.559299 4902 scope.go:117] "RemoveContainer" containerID="af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" Jan 21 14:56:30 crc kubenswrapper[4902]: E0121 14:56:30.559627 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7\": container with ID starting with af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7 not found: ID does not exist" containerID="af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.559658 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7"} err="failed to get container status \"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7\": rpc error: code = NotFound desc = could not find container \"af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7\": container with ID starting with af9f073804982bbf0942a376bb18356eea225f84098032d4038cc46194a331e7 not found: ID does not exist" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.581484 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592167 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:30 crc kubenswrapper[4902]: E0121 14:56:30.592630 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592654 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" Jan 21 14:56:30 crc kubenswrapper[4902]: E0121 14:56:30.592731 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592739 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592945 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-metadata" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.592967 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" containerName="nova-metadata-log" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.594074 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.596619 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.597664 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.601013 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.694158 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.694697 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.695260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.695502 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.695578 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797488 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797566 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797602 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 
14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797718 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.797750 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.798361 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.803201 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.803528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.803678 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.816400 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") pod \"nova-metadata-0\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " pod="openstack/nova-metadata-0" Jan 21 14:56:30 crc kubenswrapper[4902]: I0121 14:56:30.914802 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:31 crc kubenswrapper[4902]: I0121 14:56:31.393081 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:31 crc kubenswrapper[4902]: I0121 14:56:31.492891 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerStarted","Data":"34e826e9786b7ad724ed0dc96336ea0075c6129a9fc9742797a8ae0fd3c41773"} Jan 21 14:56:32 crc kubenswrapper[4902]: I0121 14:56:32.306030 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a" path="/var/lib/kubelet/pods/a8f8e17e-d3bd-45ef-bc21-53e079ab3b4a/volumes" Jan 21 14:56:32 crc kubenswrapper[4902]: I0121 14:56:32.503499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerStarted","Data":"6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377"} Jan 21 14:56:32 crc kubenswrapper[4902]: I0121 14:56:32.503544 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerStarted","Data":"090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e"} Jan 21 14:56:34 crc kubenswrapper[4902]: I0121 14:56:34.129882 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 14:56:35 crc kubenswrapper[4902]: I0121 14:56:35.915301 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:56:35 crc kubenswrapper[4902]: I0121 14:56:35.915743 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:56:37 crc kubenswrapper[4902]: I0121 14:56:37.790317 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:56:37 crc kubenswrapper[4902]: I0121 14:56:37.790587 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:56:38 crc kubenswrapper[4902]: I0121 14:56:38.804223 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:38 crc kubenswrapper[4902]: I0121 14:56:38.804297 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:39 crc kubenswrapper[4902]: I0121 14:56:39.130135 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 14:56:39 crc kubenswrapper[4902]: I0121 14:56:39.166650 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 14:56:39 crc kubenswrapper[4902]: I0121 14:56:39.186184 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=9.186167958 podStartE2EDuration="9.186167958s" podCreationTimestamp="2026-01-21 14:56:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:32.527642515 +0000 UTC m=+1354.604475544" watchObservedRunningTime="2026-01-21 14:56:39.186167958 +0000 UTC m=+1361.263000987" Jan 21 14:56:39 crc kubenswrapper[4902]: I0121 14:56:39.605287 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 14:56:40 crc kubenswrapper[4902]: I0121 14:56:40.915842 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:56:40 crc kubenswrapper[4902]: I0121 14:56:40.916092 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:56:41 crc kubenswrapper[4902]: I0121 14:56:41.930219 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:41 crc kubenswrapper[4902]: I0121 14:56:41.930249 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:45 crc kubenswrapper[4902]: I0121 14:56:45.693057 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 14:56:47 crc kubenswrapper[4902]: I0121 14:56:47.796956 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:56:47 crc kubenswrapper[4902]: I0121 14:56:47.797774 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:56:47 crc kubenswrapper[4902]: I0121 14:56:47.802675 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:56:47 crc kubenswrapper[4902]: I0121 14:56:47.804645 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:56:48 crc kubenswrapper[4902]: I0121 14:56:48.657080 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:56:48 crc kubenswrapper[4902]: I0121 14:56:48.663596 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:56:50 crc kubenswrapper[4902]: I0121 14:56:50.921263 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:56:50 crc kubenswrapper[4902]: I0121 14:56:50.922808 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:56:50 crc kubenswrapper[4902]: I0121 14:56:50.927373 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:56:51 crc kubenswrapper[4902]: I0121 14:56:51.710209 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.604425 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:10 crc kubenswrapper[4902]: 
I0121 14:57:10.605860 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.607924 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.618483 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.722441 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.722555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.824588 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.824699 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.825645 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.878184 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.890733 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") pod \"root-account-create-update-zbrd5\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.911784 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.916675 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.940812 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.945088 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gvjmj"] Jan 21 14:57:10 crc kubenswrapper[4902]: I0121 14:57:10.979788 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gvjmj"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.030319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.030456 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.065150 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.065377 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerName="openstackclient" containerID="cri-o://c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" gracePeriod=2 Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.097889 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.113287 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.133535 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.133676 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.134940 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.138158 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.176112 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9bb1-account-create-update-f5hbr"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.204328 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.235375 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.235598 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" containerID="cri-o://c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" gracePeriod=30 Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.235971 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="openstack-network-exporter" containerID="cri-o://e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db" gracePeriod=30 Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.237639 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") pod \"barbican-9bb1-account-create-update-dbdlg\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.261101 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:11 crc kubenswrapper[4902]: E0121 14:57:11.261523 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerName="openstackclient" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.261543 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerName="openstackclient" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.261720 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerName="openstackclient" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.262298 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.267576 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.274007 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.285109 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.294355 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:11 crc kubenswrapper[4902]: I0121 14:57:11.295030 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7226-account-create-update-krlk5"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.336905 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.336996 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.369175 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.405050 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.416382 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ktqgj"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.436174 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6f87-account-create-update-w85cg"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.438377 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.438522 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.440106 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.440350 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.440396 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:11.940380258 +0000 UTC m=+1394.017213287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data") pod "rabbitmq-cell1-server-0" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.538245 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") pod \"cinder-7226-account-create-update-dvfjh\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.605781 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.606952 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.608496 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.615217 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-b64dh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.662120 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-b64dh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.716645 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.716745 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.748224 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.784727 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.803790 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.820183 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.820223 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.821239 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.839563 4902 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.852986 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.921080 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.921139 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.963089 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") pod \"nova-api-6f87-account-create-update-rx9dv\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:11.980598 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ff2c3d8_2d68_4255_a175_21f0df1b9276.slice/crio-e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:11.917410 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.019785 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.019979 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-c27gh" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerName="openstack-network-exporter" containerID="cri-o://1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.026518 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.026573 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:13.026556951 +0000 UTC m=+1395.103389980 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data") pod "rabbitmq-cell1-server-0" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.032806 4902 generic.go:334] "Generic (PLEG): container finished" podID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerID="e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db" exitCode=2 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.032843 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerDied","Data":"e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db"} Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.054552 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-lmnmw"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.116527 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.132488 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.139199 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.150885 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.156835 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.181813 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.193237 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-457b-account-create-update-2trwh"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.203538 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-twg7k"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.214312 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.226024 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7ca2-account-create-update-tz26x"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.234797 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-twg7k"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.263743 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.263994 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="dnsmasq-dns" containerID="cri-o://193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef" gracePeriod=10 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.305778 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zlh54"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.334768 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.334958 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.360009 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137b1040-d368-4b6d-a4db-ba7c626f666f" path="/var/lib/kubelet/pods/137b1040-d368-4b6d-a4db-ba7c626f666f/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.361134 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="249b1461-ed19-4572-b1e6-c5c44cfa9145" path="/var/lib/kubelet/pods/249b1461-ed19-4572-b1e6-c5c44cfa9145/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.361896 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b401edf-e2ca-4abb-adb7-008ce32403b1" path="/var/lib/kubelet/pods/3b401edf-e2ca-4abb-adb7-008ce32403b1/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.372204 4902 handlers.go:78] "Exec lifecycle hook for Container in Pod 
failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-kxwsm" message=< Jan 21 14:57:12 crc kubenswrapper[4902]: Exiting ovn-controller (1) [ OK ] Jan 21 14:57:12 crc kubenswrapper[4902]: > Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.372237 4902 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-kxwsm" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" containerID="cri-o://339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.372269 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-kxwsm" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" containerID="cri-o://339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.373678 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6baf26e6-f197-4ae1-b7a5-40a1147e3276" path="/var/lib/kubelet/pods/6baf26e6-f197-4ae1-b7a5-40a1147e3276/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.374928 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83490157-abed-443f-8843-945bb43715af" path="/var/lib/kubelet/pods/83490157-abed-443f-8843-945bb43715af/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.375630 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab17d58c-9dc5-4a20-8ca7-3d06256080c3" path="/var/lib/kubelet/pods/ab17d58c-9dc5-4a20-8ca7-3d06256080c3/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.376272 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0fa0e74-137e-4ff6-9610-37b9ebe612c9" path="/var/lib/kubelet/pods/d0fa0e74-137e-4ff6-9610-37b9ebe612c9/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.427303 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0110fe-ef40-4a4b-bad7-a3c24aa5089a" path="/var/lib/kubelet/pods/dd0110fe-ef40-4a4b-bad7-a3c24aa5089a/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.428317 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7eab019-1ec9-4109-93f8-2f3caa1fa508" path="/var/lib/kubelet/pods/e7eab019-1ec9-4109-93f8-2f3caa1fa508/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429090 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5e91bc-7b75-4275-b1b6-998431981fca" path="/var/lib/kubelet/pods/eb5e91bc-7b75-4275-b1b6-998431981fca/volumes" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429879 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429920 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4ds4z"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429935 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zlh54"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429953 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4ds4z"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429966 4902 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.429978 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-hmcs2"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.430364 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="openstack-network-exporter" containerID="cri-o://f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803" gracePeriod=300 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.435524 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.435932 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7ddf9d8f68-jjk7f" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-log" containerID="cri-o://bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.436150 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7ddf9d8f68-jjk7f" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-api" containerID="cri-o://51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.437391 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.437630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.444581 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.463117 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.463859 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="openstack-network-exporter" containerID="cri-o://fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931" gracePeriod=300 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.464675 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.484800 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.486575 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-server" containerID="cri-o://69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487016 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="swift-recon-cron" containerID="cri-o://71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487099 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="rsync" containerID="cri-o://a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487149 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-expirer" containerID="cri-o://589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487199 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-updater" containerID="cri-o://6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487249 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-auditor" containerID="cri-o://0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487307 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-replicator" containerID="cri-o://a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487570 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-server" containerID="cri-o://fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487625 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-updater" containerID="cri-o://eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487674 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-auditor" containerID="cri-o://756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487719 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-replicator" containerID="cri-o://c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487763 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-server" containerID="cri-o://df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487809 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-reaper" containerID="cri-o://b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487852 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-auditor" containerID="cri-o://723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.487898 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-replicator" containerID="cri-o://ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.504941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.527819 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.528159 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-log" containerID="cri-o://11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.528354 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-httpd" containerID="cri-o://29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.596479 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") pod \"nova-cell0-7df7-account-create-update-vrg52\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.644750 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.654712 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: E0121 14:57:12.654821 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:13.154792276 +0000 UTC m=+1395.231625305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.676509 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="ovsdbserver-sb" containerID="cri-o://00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1" gracePeriod=300 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.725177 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2k78d"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.828156 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.853196 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.889596 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hlnnm"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.917254 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" containerID="cri-o://9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" gracePeriod=300 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.962651 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.962955 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="cinder-scheduler" containerID="cri-o://669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.963466 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="probe" containerID="cri-o://c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001" gracePeriod=30 Jan 21 14:57:12 crc kubenswrapper[4902]: I0121 14:57:12.994832 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.013667 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-62fdp"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.019978 4902 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8 is running failed: container process not found" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.020713 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8 is running failed: container process not found" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.021209 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8 is running failed: container process not found" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.021271 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.034171 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.034528 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-log" containerID="cri-o://baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.035348 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-httpd" containerID="cri-o://635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.083310 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.083373 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:15.083359673 +0000 UTC m=+1397.160192702 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data") pod "rabbitmq-cell1-server-0" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.111886 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.112158 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api-log" containerID="cri-o://d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.112553 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api" containerID="cri-o://4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.146826 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.149444 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerID="11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd" exitCode=143 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.149495 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerDied","Data":"11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.159233 4902 generic.go:334] "Generic (PLEG): container finished" podID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerID="339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.159305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm" event={"ID":"e8135258-f03d-4c9a-be6f-7dd1dd099188","Type":"ContainerDied","Data":"339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164795 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_55191d4e-0310-4e6a-a10c-902e0cc8a209/ovsdbserver-sb/0.log" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164834 4902 generic.go:334] "Generic (PLEG): container finished" podID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerID="f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803" exitCode=2 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164850 4902 generic.go:334] "Generic (PLEG): container finished" podID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerID="00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1" exitCode=143 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerDied","Data":"f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.164906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerDied","Data":"00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166518 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-c27gh_8891f80f-6cb0-4dc6-9f92-836d465e1c84/openstack-network-exporter/0.log" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166544 4902 generic.go:334] "Generic (PLEG): container finished" podID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerID="1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244" exitCode=2 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c27gh" event={"ID":"8891f80f-6cb0-4dc6-9f92-836d465e1c84","Type":"ContainerDied","Data":"1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166593 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c27gh" event={"ID":"8891f80f-6cb0-4dc6-9f92-836d465e1c84","Type":"ContainerDied","Data":"b07d2a04235629b220fbd6c246ba8a8b5088d31b321ecb0ba20c9950895f0f74"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.166626 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b07d2a04235629b220fbd6c246ba8a8b5088d31b321ecb0ba20c9950895f0f74" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.171170 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-x9wcg"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.175390 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.185803 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.185868 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:14.185853994 +0000 UTC m=+1396.262687023 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.203206 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a0c6-account-create-update-g2pwx"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.212897 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.238204 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-431b-account-create-update-trwhd"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258104 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258132 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258139 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258145 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258152 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258158 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258164 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258170 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258176 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258183 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258190 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" 
containerID="723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258196 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258236 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258263 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258289 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258297 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258312 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258320 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.258345 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.259996 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerID="193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef" exitCode=0 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.260059 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerDied","Data":"193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.261411 4902 generic.go:334] "Generic (PLEG): container finished" podID="b71fc896-318c-4277-bb32-70e3424a26c9" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" exitCode=143 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.261458 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerDied","Data":"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.263125 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3/ovsdbserver-nb/0.log" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.263161 4902 generic.go:334] "Generic (PLEG): container finished" podID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerID="fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931" exitCode=2 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.263181 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerDied","Data":"fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931"} Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.310062 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-c27gh_8891f80f-6cb0-4dc6-9f92-836d465e1c84/openstack-network-exporter/0.log" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.310331 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.322760 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kxwsm" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.344385 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" containerID="cri-o://0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" gracePeriod=29 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.347763 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.348003 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887695489-rtxbl" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-api" containerID="cri-o://51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.348190 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7887695489-rtxbl" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" containerID="cri-o://2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397340 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397382 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397437 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397483 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397505 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397560 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397590 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397613 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397665 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397688 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397725 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") pod \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\" (UID: \"8891f80f-6cb0-4dc6-9f92-836d465e1c84\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.397748 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") pod \"e8135258-f03d-4c9a-be6f-7dd1dd099188\" (UID: \"e8135258-f03d-4c9a-be6f-7dd1dd099188\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.398943 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.403179 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.403255 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run" (OuterVolumeSpecName: "var-run") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). 
InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.408446 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config" (OuterVolumeSpecName: "config") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.408515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.408536 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.410115 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.413633 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts" (OuterVolumeSpecName: "scripts") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.441423 4902 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 21 14:57:13 crc kubenswrapper[4902]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 14:57:13 crc kubenswrapper[4902]: + source /usr/local/bin/container-scripts/functions Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNBridge=br-int Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNRemote=tcp:localhost:6642 Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNEncapType=geneve Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNAvailabilityZones= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ EnableChassisAsGateway=true Jan 21 14:57:13 crc kubenswrapper[4902]: ++ PhysicalNetworks= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNHostName= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 14:57:13 crc kubenswrapper[4902]: ++ ovs_dir=/var/lib/openvswitch Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 14:57:13 crc kubenswrapper[4902]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + cleanup_ovsdb_server_semaphore Jan 21 14:57:13 crc kubenswrapper[4902]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 14:57:13 crc kubenswrapper[4902]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-4sm9h" message=< Jan 21 14:57:13 crc kubenswrapper[4902]: Exiting ovsdb-server (5) [ OK ] Jan 21 14:57:13 crc kubenswrapper[4902]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 14:57:13 crc kubenswrapper[4902]: + source /usr/local/bin/container-scripts/functions Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNBridge=br-int Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNRemote=tcp:localhost:6642 Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNEncapType=geneve Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNAvailabilityZones= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ EnableChassisAsGateway=true Jan 21 14:57:13 crc kubenswrapper[4902]: ++ PhysicalNetworks= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNHostName= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 14:57:13 crc kubenswrapper[4902]: ++ ovs_dir=/var/lib/openvswitch Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 14:57:13 crc kubenswrapper[4902]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + cleanup_ovsdb_server_semaphore Jan 21 14:57:13 crc kubenswrapper[4902]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 14:57:13 crc kubenswrapper[4902]: > Jan 21 14:57:13 crc kubenswrapper[4902]: E0121 14:57:13.441471 4902 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 21 14:57:13 crc kubenswrapper[4902]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 14:57:13 crc kubenswrapper[4902]: + source /usr/local/bin/container-scripts/functions Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNBridge=br-int Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNRemote=tcp:localhost:6642 Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNEncapType=geneve Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNAvailabilityZones= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ EnableChassisAsGateway=true Jan 21 14:57:13 crc kubenswrapper[4902]: ++ PhysicalNetworks= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ OVNHostName= Jan 21 14:57:13 crc kubenswrapper[4902]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 14:57:13 crc kubenswrapper[4902]: ++ ovs_dir=/var/lib/openvswitch Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 14:57:13 crc kubenswrapper[4902]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 14:57:13 crc kubenswrapper[4902]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + sleep 0.5 Jan 21 14:57:13 crc kubenswrapper[4902]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:57:13 crc kubenswrapper[4902]: + cleanup_ovsdb_server_semaphore Jan 21 14:57:13 crc kubenswrapper[4902]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:57:13 crc kubenswrapper[4902]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 14:57:13 crc kubenswrapper[4902]: > pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" containerID="cri-o://df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.441512 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" containerID="cri-o://df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" gracePeriod=29 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.442813 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb" (OuterVolumeSpecName: "kube-api-access-smxgb") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "kube-api-access-smxgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.443569 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz" (OuterVolumeSpecName: "kube-api-access-x5wqz") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "kube-api-access-x5wqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.480160 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-nxvvs"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500062 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500088 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5wqz\" (UniqueName: \"kubernetes.io/projected/8891f80f-6cb0-4dc6-9f92-836d465e1c84-kube-api-access-x5wqz\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500097 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8891f80f-6cb0-4dc6-9f92-836d465e1c84-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500106 4902 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500114 4902 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500121 4902 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8891f80f-6cb0-4dc6-9f92-836d465e1c84-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500129 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smxgb\" (UniqueName: \"kubernetes.io/projected/e8135258-f03d-4c9a-be6f-7dd1dd099188-kube-api-access-smxgb\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500137 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e8135258-f03d-4c9a-be6f-7dd1dd099188-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.500146 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8135258-f03d-4c9a-be6f-7dd1dd099188-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.503450 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.504255 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: 
"8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.523017 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.545348 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.547210 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.563193 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4czjl"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.603088 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.603113 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.608891 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8891f80f-6cb0-4dc6-9f92-836d465e1c84" (UID: "8891f80f-6cb0-4dc6-9f92-836d465e1c84"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.617902 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "e8135258-f03d-4c9a-be6f-7dd1dd099188" (UID: "e8135258-f03d-4c9a-be6f-7dd1dd099188"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.643191 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.643463 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" containerID="cri-o://090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.643883 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" containerID="cri-o://6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.664677 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.671910 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lwq2z"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.678102 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.684724 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-np7hz"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.694778 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.695001 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener-log" containerID="cri-o://f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.695370 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener" containerID="cri-o://c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.704217 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="galera" containerID="cri-o://6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.704357 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.704572 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5df595696d-2ftxp" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" containerID="cri-o://b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.704902 4902 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-5df595696d-2ftxp" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" containerID="cri-o://709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705678 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705702 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705752 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j28lj\" (UniqueName: \"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705826 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.705991 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") pod \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\" (UID: \"5ef26f87-2d73-4847-abfb-a3bbda8c01c6\") " Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.706478 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8135258-f03d-4c9a-be6f-7dd1dd099188-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.706502 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8891f80f-6cb0-4dc6-9f92-836d465e1c84-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.727563 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.734075 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj" (OuterVolumeSpecName: "kube-api-access-j28lj") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "kube-api-access-j28lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.761201 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.761520 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-68564cb5c-bh98h" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker-log" containerID="cri-o://43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.762149 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-68564cb5c-bh98h" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker" containerID="cri-o://5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.789423 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.794259 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.794568 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" containerID="cri-o://155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.795106 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" containerID="cri-o://fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" gracePeriod=30 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.801420 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.803355 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.808012 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.810313 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j28lj\" (UniqueName: \"kubernetes.io/projected/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-kube-api-access-j28lj\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.810342 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.822426 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vcplz"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.833652 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vcplz"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.845284 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.878274 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" containerID="cri-o://9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70" gracePeriod=604800 Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.878447 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"] Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.889704 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.911907 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.911934 4902 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.917642 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config" (OuterVolumeSpecName: "config") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:13 crc kubenswrapper[4902]: I0121 14:57:13.951416 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-rkcxd"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.020939 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.022957 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ef26f87-2d73-4847-abfb-a3bbda8c01c6" (UID: "5ef26f87-2d73-4847-abfb-a3bbda8c01c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.026519 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.148847 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.163844 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ef26f87-2d73-4847-abfb-a3bbda8c01c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.239255 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.239508 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99" gracePeriod=30 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.249108 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.262086 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.268889 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.269218 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:16.269204992 +0000 UTC m=+1398.346038021 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.285014 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.285231 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerName="nova-scheduler-scheduler" containerID="cri-o://421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" gracePeriod=30 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.312181 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.314668 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3/ovsdbserver-nb/0.log" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.314696 4902 generic.go:334] "Generic (PLEG): container finished" podID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.314997 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035bb03b-fb8e-4b30-a30f-bfde97b03291" path="/var/lib/kubelet/pods/035bb03b-fb8e-4b30-a30f-bfde97b03291/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.316096 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095a6aec-1aa5-4754-818a-bbe7eedad9f2" path="/var/lib/kubelet/pods/095a6aec-1aa5-4754-818a-bbe7eedad9f2/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.316382 4902 generic.go:334] "Generic (PLEG): container finished" podID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerID="090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.317115 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ef0c45-4c21-4824-850e-545f66a2c20a" path="/var/lib/kubelet/pods/15ef0c45-4c21-4824-850e-545f66a2c20a/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.318060 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22787b52-e166-415c-906e-788b1b73ccd0" path="/var/lib/kubelet/pods/22787b52-e166-415c-906e-788b1b73ccd0/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.318396 4902 generic.go:334] "Generic (PLEG): container finished" podID="561efc1e-a930-440f-83b1-a75217a11f32" containerID="b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.319520 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfed335-1a3f-4e42-b593-e5958039dadc" path="/var/lib/kubelet/pods/3dfed335-1a3f-4e42-b593-e5958039dadc/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.320685 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6d6225-3f7d-485d-a384-5f0e53c3055d" path="/var/lib/kubelet/pods/4c6d6225-3f7d-485d-a384-5f0e53c3055d/volumes" Jan 21 14:57:14 crc 
kubenswrapper[4902]: I0121 14:57:14.321011 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4168bc0-26cf-4786-9e28-95647462c372" containerID="baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.322165 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d" path="/var/lib/kubelet/pods/56dceeb6-ebc6-44b8-aba5-5f203f1a8d5d/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.322749 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.324345 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86cb92f1-5dde-4389-a5c8-1c0f76b1478d" path="/var/lib/kubelet/pods/86cb92f1-5dde-4389-a5c8-1c0f76b1478d/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.325139 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3ab19a-d650-41ea-aadd-8ec73ed824f2" path="/var/lib/kubelet/pods/8f3ab19a-d650-41ea-aadd-8ec73ed824f2/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.326199 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9959d508-3783-403a-bdd6-65159821fc9e" path="/var/lib/kubelet/pods/9959d508-3783-403a-bdd6-65159821fc9e/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.327601 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a55b324-126b-4571-a2ab-1ea8005e3c46" path="/var/lib/kubelet/pods/9a55b324-126b-4571-a2ab-1ea8005e3c46/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.328380 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb2fcf0-980e-418a-b776-ec7836101d6b" path="/var/lib/kubelet/pods/acb2fcf0-980e-418a-b776-ec7836101d6b/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.329244 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9277be-e557-4d2e-b799-8fc6def975b9" path="/var/lib/kubelet/pods/df9277be-e557-4d2e-b799-8fc6def975b9/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.329910 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef10c95-ed5c-4479-b01f-8f956d478dcf" path="/var/lib/kubelet/pods/eef10c95-ed5c-4479-b01f-8f956d478dcf/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.331090 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5dd3ace-42a8-4c8e-8531-0c04f145a002" path="/var/lib/kubelet/pods/f5dd3ace-42a8-4c8e-8531-0c04f145a002/volumes" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.352195 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerID="155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.358556 4902 generic.go:334] "Generic (PLEG): container finished" podID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" containerID="c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" exitCode=137 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.358993 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.361750 4902 generic.go:334] "Generic (PLEG): container finished" podID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerID="c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368104 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerDied","Data":"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368143 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerDied","Data":"9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368161 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368178 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368193 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3","Type":"ContainerDied","Data":"2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368203 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa4acbc229f26d07119b0fd5c43c50281090a6fcc6e1442dc8b7ca5938b7ddb" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368214 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerDied","Data":"090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368227 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2kxkv"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368242 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerDied","Data":"b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368255 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerDied","Data":"baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368266 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" event={"ID":"5ef26f87-2d73-4847-abfb-a3bbda8c01c6","Type":"ContainerDied","Data":"e36154beae48e47217e600b25e3832ce07f5b5cba75bd916fc8d19d2d77082ca"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368290 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerDied","Data":"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368301 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368316 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerDied","Data":"c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368328 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lrj4d"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.368478 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor" containerID="cri-o://357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" gracePeriod=30 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.369879 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" containerID="cri-o://d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852" gracePeriod=604800 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.370181 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" containerID="cri-o://a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" gracePeriod=30 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.370250 4902 scope.go:117] "RemoveContainer" containerID="193c2ec1f234088f5b0bf3f8d841b9715ab506a6f64990bd75f4173da10330ef" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.371106 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") pod \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.371138 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") pod \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.371229 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") pod \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.371354 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") pod \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\" (UID: \"b14dfbd1-cf80-4ba8-9372-ca5767f5d689\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 
14:57:14.386351 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.386379 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.386417 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.386441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.389366 4902 generic.go:334] "Generic (PLEG): container finished" podID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerID="43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.389411 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerDied","Data":"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.391669 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67" (OuterVolumeSpecName: "kube-api-access-s5x67") pod "b14dfbd1-cf80-4ba8-9372-ca5767f5d689" (UID: "b14dfbd1-cf80-4ba8-9372-ca5767f5d689"). InnerVolumeSpecName "kube-api-access-s5x67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.392432 4902 generic.go:334] "Generic (PLEG): container finished" podID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerID="f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.392476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerDied","Data":"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.393754 4902 generic.go:334] "Generic (PLEG): container finished" podID="db4d047b-49f4-4b55-a053-081f1be632b7" containerID="d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f" exitCode=143 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.393782 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerDied","Data":"d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.397536 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerID="2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6" exitCode=0 Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.397581 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerDied","Data":"2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.400601 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c27gh" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.401221 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kxwsm" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.401506 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kxwsm" event={"ID":"e8135258-f03d-4c9a-be6f-7dd1dd099188","Type":"ContainerDied","Data":"4abe7b149b5deee49487446d44f9ad3581d14a3d2ca4cc34cd11e6b49541512c"} Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.408652 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3/ovsdbserver-nb/0.log" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.408728 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.442661 4902 scope.go:117] "RemoveContainer" containerID="462406faba8c1d9f8c0864988f3185e2594f2024aa4406a8b2fa2099a7006d0c" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.457615 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b14dfbd1-cf80-4ba8-9372-ca5767f5d689" (UID: "b14dfbd1-cf80-4ba8-9372-ca5767f5d689"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.471563 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_55191d4e-0310-4e6a-a10c-902e0cc8a209/ovsdbserver-sb/0.log" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.471644 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.474425 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5x67\" (UniqueName: \"kubernetes.io/projected/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-kube-api-access-s5x67\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.476951 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.484268 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b14dfbd1-cf80-4ba8-9372-ca5767f5d689" (UID: "b14dfbd1-cf80-4ba8-9372-ca5767f5d689"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.496154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b14dfbd1-cf80-4ba8-9372-ca5767f5d689" (UID: "b14dfbd1-cf80-4ba8-9372-ca5767f5d689"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.496214 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.503146 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kxwsm"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.510878 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.526745 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-c27gh"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.539861 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.544187 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.551361 4902 scope.go:117] "RemoveContainer" containerID="c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.556380 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:14 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: if [ -n "barbican" ]; then Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="barbican" Jan 21 14:57:14 crc kubenswrapper[4902]: else Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:14 crc kubenswrapper[4902]: fi Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:14 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:14 crc kubenswrapper[4902]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:14 crc kubenswrapper[4902]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:14 crc kubenswrapper[4902]: # support updates Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.557617 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-9bb1-account-create-update-dbdlg" podUID="5a189ccd-729c-4453-8adf-7ef08834d320" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.558453 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:14 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: if [ -n "" ]; then Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="" Jan 21 14:57:14 crc kubenswrapper[4902]: else Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:14 crc kubenswrapper[4902]: fi Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:14 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:14 crc kubenswrapper[4902]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:14 crc kubenswrapper[4902]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:14 crc kubenswrapper[4902]: # support updates Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.560261 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-zbrd5" podUID="8e00e8be-96f7-4457-821f-440694bd8692" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580050 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580111 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580148 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580203 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580229 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580270 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580290 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.580334 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581539 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581635 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581652 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") pod \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\" (UID: \"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581671 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.581689 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.582220 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.582282 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") pod \"55191d4e-0310-4e6a-a10c-902e0cc8a209\" (UID: \"55191d4e-0310-4e6a-a10c-902e0cc8a209\") " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.585307 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.585338 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14dfbd1-cf80-4ba8-9372-ca5767f5d689-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.586212 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts" (OuterVolumeSpecName: "scripts") pod 
"caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.588125 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.588984 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts" (OuterVolumeSpecName: "scripts") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.591841 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config" (OuterVolumeSpecName: "config") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.592401 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.593800 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s" (OuterVolumeSpecName: "kube-api-access-g9q6s") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "kube-api-access-g9q6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.594239 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config" (OuterVolumeSpecName: "config") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.619108 4902 scope.go:117] "RemoveContainer" containerID="c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.620205 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a\": container with ID starting with c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a not found: ID does not exist" containerID="c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.620274 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a"} err="failed to get container status \"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a\": rpc error: code = NotFound desc = could not find container \"c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a\": container with ID starting with c3136e4ad34aa1ed13927876b4fa6d4fd6063bfd87d28178ef6e44c14e4cf73a not found: ID does not exist" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.620300 4902 scope.go:117] "RemoveContainer" containerID="339126d2349790760c7b3087cf9fa15cd976581645c959f56ddb41d46b290f7c" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.620396 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.627518 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs" (OuterVolumeSpecName: "kube-api-access-5d8zs") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "kube-api-access-5d8zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.630618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691888 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d8zs\" (UniqueName: \"kubernetes.io/projected/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-kube-api-access-5d8zs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691910 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691918 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691926 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55191d4e-0310-4e6a-a10c-902e0cc8a209-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691934 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691943 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9q6s\" (UniqueName: \"kubernetes.io/projected/55191d4e-0310-4e6a-a10c-902e0cc8a209-kube-api-access-g9q6s\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691962 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691970 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691978 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.691989 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.721551 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.749244 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.750503 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.755304 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.760562 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.764600 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:14 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: if [ -n "nova_api" ]; then Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="nova_api" Jan 21 14:57:14 crc kubenswrapper[4902]: else Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:14 crc kubenswrapper[4902]: fi Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:14 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:14 crc kubenswrapper[4902]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:14 crc kubenswrapper[4902]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:14 crc kubenswrapper[4902]: # support updates Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.767137 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-6f87-account-create-update-rx9dv" podUID="8c005e52-f6e5-413f-ba23-cb99e461cb66" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.775737 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.775789 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.793187 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.798000 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.799698 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.811829 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:14 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: if [ -n "cinder" ]; then Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="cinder" Jan 21 14:57:14 crc kubenswrapper[4902]: else Jan 21 14:57:14 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:14 crc kubenswrapper[4902]: fi Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:14 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:14 crc kubenswrapper[4902]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:14 crc kubenswrapper[4902]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:14 crc kubenswrapper[4902]: # support updates Jan 21 14:57:14 crc kubenswrapper[4902]: Jan 21 14:57:14 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:14 crc kubenswrapper[4902]: E0121 14:57:14.815262 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-7226-account-create-update-dvfjh" podUID="6a8bdead-378c-4db8-acfe-a0b449c69e8a" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.819960 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.823007 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.867257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.880252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.892454 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "55191d4e-0310-4e6a-a10c-902e0cc8a209" (UID: "55191d4e-0310-4e6a-a10c-902e0cc8a209"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.893826 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.898082 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" (UID: "caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3"). 
InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901345 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901375 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901386 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901395 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901404 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: I0121 14:57:14.901412 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55191d4e-0310-4e6a-a10c-902e0cc8a209-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:14 crc kubenswrapper[4902]: W0121 14:57:14.966276 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a61b8e1_9a04_429b_9439_bee181301046.slice/crio-c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23 WatchSource:0}: Error finding container c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23: Status 404 returned error can't find the container with id c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.010350 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7887695489-rtxbl" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": dial tcp 10.217.0.159:9696: connect: connection refused" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.018491 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:57:15 crc kubenswrapper[4902]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 
21 14:57:15 crc kubenswrapper[4902]: if [ -n "nova_cell0" ]; then Jan 21 14:57:15 crc kubenswrapper[4902]: GRANT_DATABASE="nova_cell0" Jan 21 14:57:15 crc kubenswrapper[4902]: else Jan 21 14:57:15 crc kubenswrapper[4902]: GRANT_DATABASE="*" Jan 21 14:57:15 crc kubenswrapper[4902]: fi Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: # going for maximum compatibility here: Jan 21 14:57:15 crc kubenswrapper[4902]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:57:15 crc kubenswrapper[4902]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:57:15 crc kubenswrapper[4902]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:57:15 crc kubenswrapper[4902]: # support updates Jan 21 14:57:15 crc kubenswrapper[4902]: Jan 21 14:57:15 crc kubenswrapper[4902]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.019666 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" podUID="1a61b8e1-9a04-429b-9439-bee181301046" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.110342 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.110562 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-54bc9cbc97-hx966" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-httpd" containerID="cri-o://03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" gracePeriod=30 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.110931 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-54bc9cbc97-hx966" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-server" containerID="cri-o://2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" gracePeriod=30 Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.112415 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.112481 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:19.112467602 +0000 UTC m=+1401.189300631 (durationBeforeRetry 4s). 
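The repeated ExecSync errors above ("cannot register an exec PID: container is stopping") are readiness probes racing the shutdown of nova-cell0-conductor-0, and the neutron-httpd readiness probe is refused at 10.217.0.159:9696, which during this restart window most likely means the backend is simply not listening yet. If the failures persist after the pods settle, the same checks can be re-run by hand, assuming an oc session against this cluster:

    # Re-run the probes manually; the pgrep command and the 10.217.0.159:9696
    # endpoint are taken verbatim from the probe errors above.
    oc exec -n openstack nova-cell0-conductor-0 -c nova-cell0-conductor-conductor \
      -- /usr/bin/pgrep -r DRST nova-conductor
    curl -ks -o /dev/null -w '%{http_code}\n' https://10.217.0.159:9696/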
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data") pod "rabbitmq-cell1-server-0" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.419282 4902 generic.go:334] "Generic (PLEG): container finished" podID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerID="03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" exitCode=0 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.419399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerDied","Data":"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.437188 4902 generic.go:334] "Generic (PLEG): container finished" podID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerID="51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99" exitCode=0 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.437293 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044","Type":"ContainerDied","Data":"51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.452013 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-dbdlg" event={"ID":"5a189ccd-729c-4453-8adf-7ef08834d320","Type":"ContainerStarted","Data":"1120906563f2c993fa9342e6037fa08fad280845ccc8b73576e20887d9536a97"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.474614 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_55191d4e-0310-4e6a-a10c-902e0cc8a209/ovsdbserver-sb/0.log" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.474806 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"55191d4e-0310-4e6a-a10c-902e0cc8a209","Type":"ContainerDied","Data":"7b6bfe3f7296114e25ecf2caceede712b35695e06d9545a4b2270d1cce053ea2"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.474876 4902 util.go:48] "No ready sandbox for pod can be found. 
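The CreateContainerConfigError entries in this window (barbican-db-secret, nova-api-db-secret, cinder-db-secret, nova-cell0-db-secret, openstack-cell1-mariadb-root-db-secret) and the MountVolume.SetUp failure for rabbitmq-cell1-config-data all point at Secret or ConfigMap objects the kubelet cannot find yet; the kubelet retries on its own, so the useful check is simply whether the objects have appeared. A sketch, assuming an oc session against this cluster, with the names copied verbatim from the errors above:

    # List the objects the kubelet reported as missing; NotFound here means the
    # owning operator has not (re)created them yet.
    oc -n openstack get secret barbican-db-secret nova-api-db-secret \
      cinder-db-secret nova-cell0-db-secret openstack-cell1-mariadb-root-db-secret
    oc -n openstack get configmap rabbitmq-cell1-config-data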
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.474879 4902 scope.go:117] "RemoveContainer" containerID="f33529c27085ffa8a5953825706b4cb4672e9bfd551a411eede0445f1ce65803" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.478210 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-dvfjh" event={"ID":"6a8bdead-378c-4db8-acfe-a0b449c69e8a","Type":"ContainerStarted","Data":"f20441eedce16d5292ec3a928996e2d617be39de9f96a7561e77ab7123507595"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.527646 4902 generic.go:334] "Generic (PLEG): container finished" podID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerID="6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a" exitCode=0 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.527758 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerDied","Data":"6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.527797 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"94031dcf-9569-4cf1-90a9-61c962434ae8","Type":"ContainerDied","Data":"0e2225caf36121574255d90227f9966e2a981074b953f7b34948ace2a7d9beae"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.527812 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e2225caf36121574255d90227f9966e2a981074b953f7b34948ace2a7d9beae" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.539201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-rx9dv" event={"ID":"8c005e52-f6e5-413f-ba23-cb99e461cb66","Type":"ContainerStarted","Data":"556a8a0278b939a9c8dbd44294ce0e229928d9c06e0c74dd319ffc8da0bf47de"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.544683 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zbrd5" event={"ID":"8e00e8be-96f7-4457-821f-440694bd8692","Type":"ContainerStarted","Data":"3e4a5f1b1b650dc3abf20ac520f3ebf17eba8a0b1800cb370ba3510c73dd619b"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.588106 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" event={"ID":"1a61b8e1-9a04-429b-9439-bee181301046","Type":"ContainerStarted","Data":"c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.592538 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.599022 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.602182 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.602303 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.602397 4902 generic.go:334] "Generic (PLEG): container finished" podID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerID="669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c" exitCode=0 Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.602440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerDied","Data":"669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c"} Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.627731 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.631423 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.631581 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.631662 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.632586 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.635687 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.635794 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.635871 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.635931 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636061 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636194 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636270 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636332 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.636974 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/58b4678d-e59b-49d1-b06e-338a42a0e51e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.639934 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.648268 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d" (OuterVolumeSpecName: "kube-api-access-9x88d") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "kube-api-access-9x88d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.650768 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.655858 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts" (OuterVolumeSpecName: "scripts") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.659727 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.669460 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.669573 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94" (OuterVolumeSpecName: "kube-api-access-5cs94") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "kube-api-access-5cs94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.669818 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.669899 4902 scope.go:117] "RemoveContainer" containerID="00bf7a3928a19891dd7e4eeb9d6cbd183d170218b09cf88bac1204f77dcea9f1" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.709219 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.721979 4902 scope.go:117] "RemoveContainer" containerID="c7dbc8dbff5390b63de46436cbdf0b7cd9f0cbbc930ab3a08d07d477a6d55001" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.738650 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.738696 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") pod \"94031dcf-9569-4cf1-90a9-61c962434ae8\" (UID: \"94031dcf-9569-4cf1-90a9-61c962434ae8\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.738727 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") pod \"58b4678d-e59b-49d1-b06e-338a42a0e51e\" (UID: \"58b4678d-e59b-49d1-b06e-338a42a0e51e\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.741507 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.741557 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "94031dcf-9569-4cf1-90a9-61c962434ae8" (UID: "94031dcf-9569-4cf1-90a9-61c962434ae8"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752579 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752618 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cs94\" (UniqueName: \"kubernetes.io/projected/94031dcf-9569-4cf1-90a9-61c962434ae8-kube-api-access-5cs94\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752628 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752638 4902 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752646 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752657 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94031dcf-9569-4cf1-90a9-61c962434ae8-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752666 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752674 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x88d\" (UniqueName: \"kubernetes.io/projected/58b4678d-e59b-49d1-b06e-338a42a0e51e-kube-api-access-9x88d\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752682 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94031dcf-9569-4cf1-90a9-61c962434ae8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752690 4902 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94031dcf-9569-4cf1-90a9-61c962434ae8-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.752698 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.762331 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.762862 4902 scope.go:117] "RemoveContainer" containerID="669110a27652bb9b7b8004db550a35eb0dceaedaf48edf3ca2483cc2449bc57c" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.778545 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.799241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810288 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810728 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="mysql-bootstrap" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810747 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="mysql-bootstrap" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810757 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810766 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810782 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810788 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810797 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810802 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810812 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="ovsdbserver-sb" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810819 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="ovsdbserver-sb" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810838 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="init" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810844 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="init" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 
14:57:15.810855 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810863 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810876 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="dnsmasq-dns" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810884 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="dnsmasq-dns" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810897 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="galera" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810903 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="galera" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810917 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="cinder-scheduler" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810922 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="cinder-scheduler" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810933 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="probe" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810939 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="probe" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810949 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810956 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" Jan 21 14:57:15 crc kubenswrapper[4902]: E0121 14:57:15.810967 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.810975 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811165 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811181 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="cinder-scheduler" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811195 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="ovsdbserver-sb" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811204 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" 
containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811217 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811224 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" containerName="galera" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811230 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" containerName="openstack-network-exporter" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811238 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" containerName="dnsmasq-dns" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811244 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" containerName="ovsdbserver-nb" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811252 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" containerName="probe" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811262 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" containerName="ovn-controller" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811899 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.811981 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.814613 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.825202 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.837822 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.853793 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.853857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.853982 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854006 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854037 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") pod \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\" (UID: \"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044\") " Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854459 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854604 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.854618 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.874269 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data" (OuterVolumeSpecName: "config-data") pod "58b4678d-e59b-49d1-b06e-338a42a0e51e" (UID: "58b4678d-e59b-49d1-b06e-338a42a0e51e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.882869 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c" (OuterVolumeSpecName: "kube-api-access-4mw6c") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "kube-api-access-4mw6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.906356 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.924553 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.939299 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data" (OuterVolumeSpecName: "config-data") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.944497 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" (UID: "d59e8c8f-5bf6-4dd1-835a-b2ed93e81044"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956118 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956268 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956355 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956378 4902 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956392 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mw6c\" (UniqueName: \"kubernetes.io/projected/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-kube-api-access-4mw6c\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956403 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956414 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b4678d-e59b-49d1-b06e-338a42a0e51e-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.956424 4902 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.968122 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:15 crc kubenswrapper[4902]: I0121 14:57:15.986645 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") pod \"root-account-create-update-2c87v\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.089115 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.143892 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.159760 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") pod \"5a189ccd-729c-4453-8adf-7ef08834d320\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.159941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") pod \"5a189ccd-729c-4453-8adf-7ef08834d320\" (UID: \"5a189ccd-729c-4453-8adf-7ef08834d320\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.160457 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a189ccd-729c-4453-8adf-7ef08834d320" (UID: "5a189ccd-729c-4453-8adf-7ef08834d320"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.166379 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c" (OuterVolumeSpecName: "kube-api-access-bx42c") pod "5a189ccd-729c-4453-8adf-7ef08834d320" (UID: "5a189ccd-729c-4453-8adf-7ef08834d320"). InnerVolumeSpecName "kube-api-access-bx42c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.262142 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a189ccd-729c-4453-8adf-7ef08834d320-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.262484 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx42c\" (UniqueName: \"kubernetes.io/projected/5a189ccd-729c-4453-8adf-7ef08834d320-kube-api-access-bx42c\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.314289 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169597ed-1e1f-490a-8d17-0d6520ae39d1" path="/var/lib/kubelet/pods/169597ed-1e1f-490a-8d17-0d6520ae39d1/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.315124 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55191d4e-0310-4e6a-a10c-902e0cc8a209" path="/var/lib/kubelet/pods/55191d4e-0310-4e6a-a10c-902e0cc8a209/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.316500 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8891f80f-6cb0-4dc6-9f92-836d465e1c84" path="/var/lib/kubelet/pods/8891f80f-6cb0-4dc6-9f92-836d465e1c84/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.317638 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14dfbd1-cf80-4ba8-9372-ca5767f5d689" path="/var/lib/kubelet/pods/b14dfbd1-cf80-4ba8-9372-ca5767f5d689/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.318275 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3" path="/var/lib/kubelet/pods/caf5f1ad-bafb-4a54-b8fd-503d1a3a5fd3/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.319735 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8135258-f03d-4c9a-be6f-7dd1dd099188" path="/var/lib/kubelet/pods/e8135258-f03d-4c9a-be6f-7dd1dd099188/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.321712 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ab900b-a76f-495c-a309-f597e2d835a8" path="/var/lib/kubelet/pods/f6ab900b-a76f-495c-a309-f597e2d835a8/volumes" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.354478 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.364615 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a8bdead-378c-4db8-acfe-a0b449c69e8a" (UID: "6a8bdead-378c-4db8-acfe-a0b449c69e8a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.364031 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") pod \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.365598 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") pod \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\" (UID: \"6a8bdead-378c-4db8-acfe-a0b449c69e8a\") " Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.366642 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.366936 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:20.366880477 +0000 UTC m=+1402.443713506 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.367216 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a8bdead-378c-4db8-acfe-a0b449c69e8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.390389 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl" (OuterVolumeSpecName: "kube-api-access-n8zjl") pod "6a8bdead-378c-4db8-acfe-a0b449c69e8a" (UID: "6a8bdead-378c-4db8-acfe-a0b449c69e8a"). InnerVolumeSpecName "kube-api-access-n8zjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.422883 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.443481 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.445004 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.445088 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.449690 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.449743 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.453192 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.458507 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.469776 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.471414 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.471472 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472461 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") pod \"1a61b8e1-9a04-429b-9439-bee181301046\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472524 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") pod \"8e00e8be-96f7-4457-821f-440694bd8692\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472553 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") pod \"8e00e8be-96f7-4457-821f-440694bd8692\" (UID: \"8e00e8be-96f7-4457-821f-440694bd8692\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472572 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") pod \"8c005e52-f6e5-413f-ba23-cb99e461cb66\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472587 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") pod \"8c005e52-f6e5-413f-ba23-cb99e461cb66\" (UID: \"8c005e52-f6e5-413f-ba23-cb99e461cb66\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.472896 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8zjl\" (UniqueName: \"kubernetes.io/projected/6a8bdead-378c-4db8-acfe-a0b449c69e8a-kube-api-access-n8zjl\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.477473 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e00e8be-96f7-4457-821f-440694bd8692" (UID: "8e00e8be-96f7-4457-821f-440694bd8692"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.480325 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c005e52-f6e5-413f-ba23-cb99e461cb66" (UID: "8c005e52-f6e5-413f-ba23-cb99e461cb66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.485709 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr" (OuterVolumeSpecName: "kube-api-access-rvjnr") pod "8c005e52-f6e5-413f-ba23-cb99e461cb66" (UID: "8c005e52-f6e5-413f-ba23-cb99e461cb66"). InnerVolumeSpecName "kube-api-access-rvjnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.486445 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv" (OuterVolumeSpecName: "kube-api-access-p7gjv") pod "8e00e8be-96f7-4457-821f-440694bd8692" (UID: "8e00e8be-96f7-4457-821f-440694bd8692"). InnerVolumeSpecName "kube-api-access-p7gjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.490005 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.490201 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f" (OuterVolumeSpecName: "kube-api-access-cln9f") pod "1a61b8e1-9a04-429b-9439-bee181301046" (UID: "1a61b8e1-9a04-429b-9439-bee181301046"). InnerVolumeSpecName "kube-api-access-cln9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573321 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") pod \"1a61b8e1-9a04-429b-9439-bee181301046\" (UID: \"1a61b8e1-9a04-429b-9439-bee181301046\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573778 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573818 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573848 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573874 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573910 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573937 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573952 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.573974 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a61b8e1-9a04-429b-9439-bee181301046" (UID: "1a61b8e1-9a04-429b-9439-bee181301046"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574213 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cln9f\" (UniqueName: \"kubernetes.io/projected/1a61b8e1-9a04-429b-9439-bee181301046-kube-api-access-cln9f\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574226 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7gjv\" (UniqueName: \"kubernetes.io/projected/8e00e8be-96f7-4457-821f-440694bd8692-kube-api-access-p7gjv\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574234 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e00e8be-96f7-4457-821f-440694bd8692-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574243 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c005e52-f6e5-413f-ba23-cb99e461cb66-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574252 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvjnr\" (UniqueName: \"kubernetes.io/projected/8c005e52-f6e5-413f-ba23-cb99e461cb66-kube-api-access-rvjnr\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574260 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61b8e1-9a04-429b-9439-bee181301046-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.574500 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.578724 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.584672 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn" (OuterVolumeSpecName: "kube-api-access-msjhn") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "kube-api-access-msjhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.636537 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.645245 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerID="29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e" exitCode=0 Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.645409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerDied","Data":"29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.645521 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data" (OuterVolumeSpecName: "config-data") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.647476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9bb1-account-create-update-dbdlg" event={"ID":"5a189ccd-729c-4453-8adf-7ef08834d320","Type":"ContainerDied","Data":"1120906563f2c993fa9342e6037fa08fad280845ccc8b73576e20887d9536a97"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.647521 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9bb1-account-create-update-dbdlg" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.653299 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.654656 4902 generic.go:334] "Generic (PLEG): container finished" podID="db4d047b-49f4-4b55-a053-081f1be632b7" containerID="4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674" exitCode=0 Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.654736 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerDied","Data":"4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.662678 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"58b4678d-e59b-49d1-b06e-338a42a0e51e","Type":"ContainerDied","Data":"83d1b2eb20981f2a9a2a1eda26c8252ba222ee4a68dd3f7546c40138c8e10370"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.662759 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.665815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6f87-account-create-update-rx9dv" event={"ID":"8c005e52-f6e5-413f-ba23-cb99e461cb66","Type":"ContainerDied","Data":"556a8a0278b939a9c8dbd44294ce0e229928d9c06e0c74dd319ffc8da0bf47de"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.665919 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6f87-account-create-update-rx9dv" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.668949 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zbrd5" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.669007 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zbrd5" event={"ID":"8e00e8be-96f7-4457-821f-440694bd8692","Type":"ContainerDied","Data":"3e4a5f1b1b650dc3abf20ac520f3ebf17eba8a0b1800cb370ba3510c73dd619b"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.670899 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d59e8c8f-5bf6-4dd1-835a-b2ed93e81044","Type":"ContainerDied","Data":"140924a047cb28624865b0efcf1a901932347a50fbd34bbfa1c4027f44fbc891"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.670941 4902 scope.go:117] "RemoveContainer" containerID="51f3e0557ba29d0e459dc32f45c40c004e66a2616c90bcc78b93663bdae1ff99" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.671087 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675264 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") pod \"3389852b-01f7-4dc9-b7c2-73c858ba1268\" (UID: \"3389852b-01f7-4dc9-b7c2-73c858ba1268\") " Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675763 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675784 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675797 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675809 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675821 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msjhn\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-kube-api-access-msjhn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.675835 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3389852b-01f7-4dc9-b7c2-73c858ba1268-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.676119 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7226-account-create-update-dvfjh" event={"ID":"6a8bdead-378c-4db8-acfe-a0b449c69e8a","Type":"ContainerDied","Data":"f20441eedce16d5292ec3a928996e2d617be39de9f96a7561e77ab7123507595"} Jan 21 14:57:16 crc 
kubenswrapper[4902]: I0121 14:57:16.676179 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7226-account-create-update-dvfjh" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.677323 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.679436 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3389852b-01f7-4dc9-b7c2-73c858ba1268" (UID: "3389852b-01f7-4dc9-b7c2-73c858ba1268"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.679470 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" event={"ID":"1a61b8e1-9a04-429b-9439-bee181301046","Type":"ContainerDied","Data":"c1a2c01a0ef6d99a3d9e8bb7f28626beef1ad4446ccdbb5bcacd934ce87ecd23"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.679488 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7df7-account-create-update-vrg52" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.701410 4902 generic.go:334] "Generic (PLEG): container finished" podID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerID="2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" exitCode=0 Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.701496 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.702232 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-54bc9cbc97-hx966" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.702372 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerDied","Data":"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb"} Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.702422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-54bc9cbc97-hx966" event={"ID":"3389852b-01f7-4dc9-b7c2-73c858ba1268","Type":"ContainerDied","Data":"e83ca63bcfd9328da7616c6b5c09b31fc0bd4751ea531f09a2e1f38c1a7f3d76"} Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.750876 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.752304 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.753427 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.753455 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.766640 4902 scope.go:117] "RemoveContainer" containerID="2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.768375 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.778721 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9bb1-account-create-update-dbdlg"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.781020 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3389852b-01f7-4dc9-b7c2-73c858ba1268-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.781055 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3389852b-01f7-4dc9-b7c2-73c858ba1268-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.785481 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.816360 4902 scope.go:117] 
"RemoveContainer" containerID="03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.832271 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.944728 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.949076 4902 scope.go:117] "RemoveContainer" containerID="2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.949518 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb\": container with ID starting with 2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb not found: ID does not exist" containerID="2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.949551 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb"} err="failed to get container status \"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb\": rpc error: code = NotFound desc = could not find container \"2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb\": container with ID starting with 2eb107445a25258a1fa2e35b1be219980fe74dbad1b6918f323e2da2129133fb not found: ID does not exist" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.949582 4902 scope.go:117] "RemoveContainer" containerID="03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" Jan 21 14:57:16 crc kubenswrapper[4902]: E0121 14:57:16.949812 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398\": container with ID starting with 03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398 not found: ID does not exist" containerID="03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.949837 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398"} err="failed to get container status \"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398\": rpc error: code = NotFound desc = could not find container \"03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398\": container with ID starting with 03335e8a3fefd8d92b70e9a03a08f2d89aeda94dee615666c9070892a8ef0398 not found: ID does not exist" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.952316 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7226-account-create-update-dvfjh"] Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.977669 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:54538->10.217.0.203:8775: read: connection reset by peer" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.980722 4902 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:54536->10.217.0.203:8775: read: connection reset by peer" Jan 21 14:57:16 crc kubenswrapper[4902]: I0121 14:57:16.980944 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:16.999007 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6f87-account-create-update-rx9dv"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.021486 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.035921 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zbrd5"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.043314 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.050911 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.061469 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.075661 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7df7-account-create-update-vrg52"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.083751 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.092691 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-54bc9cbc97-hx966"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.101262 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.105248 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.116468 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.285585 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.285899 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-central-agent" containerID="cri-o://49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.286716 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="proxy-httpd" containerID="cri-o://d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.286803 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="sg-core" 
containerID="cri-o://91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.286827 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-notification-agent" containerID="cri-o://c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.307067 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.359202 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.359421 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerName="kube-state-metrics" containerID="cri-o://af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723" gracePeriod=30 Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.361506 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.387341 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411353 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411398 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411456 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411476 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411501 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411533 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: 
\"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411553 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411575 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411601 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411622 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411648 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411689 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411710 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.411732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412789 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 
21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412814 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412836 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412902 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") pod \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\" (UID: \"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412949 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") pod \"b71fc896-318c-4277-bb32-70e3424a26c9\" (UID: \"b71fc896-318c-4277-bb32-70e3424a26c9\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.412975 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") pod \"db4d047b-49f4-4b55-a053-081f1be632b7\" (UID: \"db4d047b-49f4-4b55-a053-081f1be632b7\") " Jan 21 14:57:17 crc kubenswrapper[4902]: I0121 14:57:17.460618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs" (OuterVolumeSpecName: "logs") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.465414 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs" (OuterVolumeSpecName: "logs") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.471901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs" (OuterVolumeSpecName: "logs") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.472190 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.478185 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519652 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4d047b-49f4-4b55-a053-081f1be632b7-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519676 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b71fc896-318c-4277-bb32-70e3424a26c9-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519687 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519694 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.519702 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db4d047b-49f4-4b55-a053-081f1be632b7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.608348 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.693102 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.693352 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" 
containerName="memcached" containerID="cri-o://c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" gracePeriod=30 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.760379 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b2af-account-create-update-g4dvb"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.784765 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d" (OuterVolumeSpecName: "kube-api-access-btb6d") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "kube-api-access-btb6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.792259 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2c87v" event={"ID":"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2","Type":"ContainerStarted","Data":"24e8d59ec3c64b717babfef7f378c16dbc7782ee7d0c22d80830c614a5f49681"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.798846 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.799303 4902 generic.go:334] "Generic (PLEG): container finished" podID="561efc1e-a930-440f-83b1-a75217a11f32" containerID="709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2" exitCode=0 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.799389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerDied","Data":"709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.799476 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7" (OuterVolumeSpecName: "kube-api-access-xtwj7") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "kube-api-access-xtwj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.799895 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.805771 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x" (OuterVolumeSpecName: "kube-api-access-6nm9x") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "kube-api-access-6nm9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.806159 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts" (OuterVolumeSpecName: "scripts") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.807518 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.807973 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808000 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-log" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808018 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-api" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808025 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-api" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808108 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-server" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808118 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-server" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808128 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808134 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api-log" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808145 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808150 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808181 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808187 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808196 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808201 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.808215 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808221 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808424 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808435 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808446 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808456 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" containerName="cinder-api" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808466 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-httpd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808498 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" containerName="proxy-server" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808510 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" containerName="glance-log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.808519 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" containerName="placement-api" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.809430 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.817166 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.817655 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.818054 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts" (OuterVolumeSpecName: "scripts") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.819278 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts" (OuterVolumeSpecName: "scripts") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.823500 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-c6zzp"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841050 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841296 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841307 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtwj7\" (UniqueName: \"kubernetes.io/projected/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-kube-api-access-xtwj7\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841317 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841356 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841375 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841385 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nm9x\" (UniqueName: \"kubernetes.io/projected/db4d047b-49f4-4b55-a053-081f1be632b7-kube-api-access-6nm9x\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.841394 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btb6d\" (UniqueName: \"kubernetes.io/projected/b71fc896-318c-4277-bb32-70e3424a26c9-kube-api-access-btb6d\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.852026 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-c6zzp"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.854172 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863335 4902 generic.go:334] "Generic (PLEG): container finished" podID="b71fc896-318c-4277-bb32-70e3424a26c9" containerID="51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" exitCode=0 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863415 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerDied","Data":"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863441 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ddf9d8f68-jjk7f" event={"ID":"b71fc896-318c-4277-bb32-70e3424a26c9","Type":"ContainerDied","Data":"31d5a67184f80e0f8e30cfab691135f2f1fd9f01d89fed99d676f711a03521eb"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863457 4902 scope.go:117] "RemoveContainer" containerID="51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.863584 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ddf9d8f68-jjk7f" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.873985 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.875009 4902 generic.go:334] "Generic (PLEG): container finished" podID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerID="91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" exitCode=2 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.875074 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.879100 4902 generic.go:334] "Generic (PLEG): container finished" podID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerID="6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377" exitCode=0 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.879167 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerDied","Data":"6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.881361 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4168bc0-26cf-4786-9e28-95647462c372" containerID="635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0" exitCode=0 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.881409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerDied","Data":"635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.884160 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.884172 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"db4d047b-49f4-4b55-a053-081f1be632b7","Type":"ContainerDied","Data":"cf192cd4c08d4018b743f3dc19c0686fe97811bb3b64651346fb935eec9339db"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.890757 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff41f7d4-e15a-4fc3-afd9-5d86fe05768f","Type":"ContainerDied","Data":"fab7af2822b1e0c413efff882a4ddbb2ff2b86596095fd2bcec07bee48c5bf19"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.890867 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.895566 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8n66z"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.914230 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.914490 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-5684459db4-jgdkj" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerName="keystone-api" containerID="cri-o://ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e" gracePeriod=30 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.923541 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.930453 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.937246 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.940765 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.949398 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.949674 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.950079 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.950095 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.950105 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.956422 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.956577 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bdp9p"] Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.950276 4902 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.962199 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:18.462163568 +0000 UTC m=+1400.538996597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : configmap "openstack-scripts" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.962239 4902 projected.go:194] Error preparing data for projected volume kube-api-access-bfjwj for pod openstack/keystone-b2af-account-create-update-852lx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:17.962488 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. 
No retries permitted until 2026-01-21 14:57:18.462478727 +0000 UTC m=+1400.539311746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bfjwj" (UniqueName: "kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:17.984316 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.069560 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.121134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.122477 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.133688 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data" (OuterVolumeSpecName: "config-data") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.159518 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.159548 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.159561 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.159574 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.169844 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data" (OuterVolumeSpecName: "config-data") pod "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" (UID: "ff41f7d4-e15a-4fc3-afd9-5d86fe05768f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.220262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.242235 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "db4d047b-49f4-4b55-a053-081f1be632b7" (UID: "db4d047b-49f4-4b55-a053-081f1be632b7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.265778 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.265855 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.265887 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4d047b-49f4-4b55-a053-081f1be632b7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.273596 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.302530 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.321077 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a61b8e1-9a04-429b-9439-bee181301046" path="/var/lib/kubelet/pods/1a61b8e1-9a04-429b-9439-bee181301046/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.321583 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3389852b-01f7-4dc9-b7c2-73c858ba1268" path="/var/lib/kubelet/pods/3389852b-01f7-4dc9-b7c2-73c858ba1268/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.322559 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b4678d-e59b-49d1-b06e-338a42a0e51e" path="/var/lib/kubelet/pods/58b4678d-e59b-49d1-b06e-338a42a0e51e/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.323466 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a189ccd-729c-4453-8adf-7ef08834d320" path="/var/lib/kubelet/pods/5a189ccd-729c-4453-8adf-7ef08834d320/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.323869 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a8bdead-378c-4db8-acfe-a0b449c69e8a" path="/var/lib/kubelet/pods/6a8bdead-378c-4db8-acfe-a0b449c69e8a/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.325261 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c005e52-f6e5-413f-ba23-cb99e461cb66" path="/var/lib/kubelet/pods/8c005e52-f6e5-413f-ba23-cb99e461cb66/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.325695 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e00e8be-96f7-4457-821f-440694bd8692" path="/var/lib/kubelet/pods/8e00e8be-96f7-4457-821f-440694bd8692/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.379445 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.379470 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.385927 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f05425e-47d3-4358-844c-9b661f254e22" path="/var/lib/kubelet/pods/8f05425e-47d3-4358-844c-9b661f254e22/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.387099 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94031dcf-9569-4cf1-90a9-61c962434ae8" path="/var/lib/kubelet/pods/94031dcf-9569-4cf1-90a9-61c962434ae8/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.389180 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966f492d-0f8f-4bef-b60f-777f25367104" path="/var/lib/kubelet/pods/966f492d-0f8f-4bef-b60f-777f25367104/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.390383 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb9c4d9-a042-4a60-adca-03be4d8ec42d" path="/var/lib/kubelet/pods/9bb9c4d9-a042-4a60-adca-03be4d8ec42d/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.390676 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data" (OuterVolumeSpecName: "config-data") pod "b71fc896-318c-4277-bb32-70e3424a26c9" (UID: "b71fc896-318c-4277-bb32-70e3424a26c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.391381 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59e8c8f-5bf6-4dd1-835a-b2ed93e81044" path="/var/lib/kubelet/pods/d59e8c8f-5bf6-4dd1-835a-b2ed93e81044/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.392017 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5b13a8-7950-40cf-9255-d2c9f34c6add" path="/var/lib/kubelet/pods/fd5b13a8-7950-40cf-9255-d2c9f34c6add/volumes" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.485134 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.485230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.485290 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b71fc896-318c-4277-bb32-70e3424a26c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.485347 4902 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.485390 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:19.485375195 +0000 UTC m=+1401.562208224 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : configmap "openstack-scripts" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.488591 4902 projected.go:194] Error preparing data for projected volume kube-api-access-bfjwj for pod openstack/keystone-b2af-account-create-update-852lx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.488673 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:19.488634603 +0000 UTC m=+1401.565467632 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bfjwj" (UniqueName: "kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.511316 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" containerID="cri-o://21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" gracePeriod=30 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.793664 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.829294 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.834459 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-bfjwj operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-b2af-account-create-update-852lx" podUID="3bdb84d6-c599-4d87-9c27-cb32ff77d6d9" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.835585 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.848612 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.861061 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.861545 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.865698 4902 scope.go:117] "RemoveContainer" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.884162 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.886995 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.888535 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897506 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897582 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") pod \"b52494a8-ff56-449e-a274-b37eb4bad43d\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897657 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897677 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897692 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897712 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvkf7\" (UniqueName: 
\"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897820 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897845 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897864 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897890 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897914 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") pod \"b52494a8-ff56-449e-a274-b37eb4bad43d\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897933 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897947 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.897992 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4sgn\" (UniqueName: \"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") pod \"b52494a8-ff56-449e-a274-b37eb4bad43d\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.898008 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") pod \"b52494a8-ff56-449e-a274-b37eb4bad43d\" (UID: \"b52494a8-ff56-449e-a274-b37eb4bad43d\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.898025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903665 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903769 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903797 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") pod \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\" (UID: \"c653ffa0-195e-4eda-8c25-cfcff2715bdf\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903834 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903863 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903915 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") pod \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\" (UID: \"3aa6f350-dd82-4d59-ac24-5460acc2a8a6\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903945 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") pod \"561efc1e-a930-440f-83b1-a75217a11f32\" (UID: \"561efc1e-a930-440f-83b1-a75217a11f32\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.903972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c4168bc0-26cf-4786-9e28-95647462c372\" (UID: \"c4168bc0-26cf-4786-9e28-95647462c372\") " Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.906091 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.906796 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.909006 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs" (OuterVolumeSpecName: "logs") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.916613 4902 scope.go:117] "RemoveContainer" containerID="51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.917371 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs" (OuterVolumeSpecName: "logs") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.922933 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts" (OuterVolumeSpecName: "scripts") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.923311 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.924576 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs" (OuterVolumeSpecName: "logs") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.926263 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn" (OuterVolumeSpecName: "kube-api-access-t4sgn") pod "b52494a8-ff56-449e-a274-b37eb4bad43d" (UID: "b52494a8-ff56-449e-a274-b37eb4bad43d"). InnerVolumeSpecName "kube-api-access-t4sgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.926533 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.936805 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.937395 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7" (OuterVolumeSpecName: "kube-api-access-jvkf7") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "kube-api-access-jvkf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.937761 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd\": container with ID starting with 51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd not found: ID does not exist" containerID="51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.937866 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd"} err="failed to get container status \"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd\": rpc error: code = NotFound desc = could not find container \"51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd\": container with ID starting with 51a3265e649ab4a25fc0fcd701faa5bc02fcc3ac2410d6e8fd4599a2661ff3bd not found: ID does not exist" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.937967 4902 scope.go:117] "RemoveContainer" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.943676 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs" (OuterVolumeSpecName: "logs") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: E0121 14:57:18.945382 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924\": container with ID starting with bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924 not found: ID does not exist" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.945417 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924"} err="failed to get container status \"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924\": rpc error: code = NotFound desc = could not find container \"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924\": container with ID starting with bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924 not found: ID does not exist" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.945445 4902 scope.go:117] "RemoveContainer" containerID="4c05b52bed8146e4b813b72bd57efca7be3d0268ea82de7f8102940d78d0f674" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.953225 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.964350 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.967509 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj" (OuterVolumeSpecName: "kube-api-access-d74jj") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "kube-api-access-d74jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.972318 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7ddf9d8f68-jjk7f"] Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.974287 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b" (OuterVolumeSpecName: "kube-api-access-kn54b") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "kube-api-access-kn54b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.974972 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2ff2c3d8-2d68-4255-a175-21f0df1b9276/ovn-northd/0.log" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.975017 4902 generic.go:334] "Generic (PLEG): container finished" podID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" exitCode=139 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.975188 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerDied","Data":"c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.980478 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm" (OuterVolumeSpecName: "kube-api-access-9cgzm") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "kube-api-access-9cgzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.986186 4902 generic.go:334] "Generic (PLEG): container finished" podID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" containerID="507ce1ddbb5ab555237835c4f50235305caf3b3ba28a9df9ec0892a88f2b0f8f" exitCode=1 Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.986267 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2c87v" event={"ID":"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2","Type":"ContainerDied","Data":"507ce1ddbb5ab555237835c4f50235305caf3b3ba28a9df9ec0892a88f2b0f8f"} Jan 21 14:57:18 crc kubenswrapper[4902]: I0121 14:57:18.995004 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.000126 4902 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924" err="rpc error: code = NotFound desc = could not find container \"bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924\": container with ID starting with bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924 not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.000168 4902 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/placement-7ddf9d8f68-jjk7f_openstack_placement-log-bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924.log: no such file or directory" path="/var/log/containers/placement-7ddf9d8f68-jjk7f_openstack_placement-log-bbc5f1a939cd9ba647e8cd975adfc462d54662803dd6aee96464a00395b24924.log" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.003192 4902 generic.go:334] "Generic (PLEG): container finished" podID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerID="c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.003241 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.003281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerDied","Data":"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.003311 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755cd77b-nd6p7" event={"ID":"365d6c18-395e-4a62-939d-a04927ffa8aa","Type":"ContainerDied","Data":"d7ec9f34e635f9308b93f9d0dc6cda96b623b10532da8d7eb05383f6117459ce"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.004946 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b52494a8-ff56-449e-a274-b37eb4bad43d" (UID: "b52494a8-ff56-449e-a274-b37eb4bad43d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005385 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") pod \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005480 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005518 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") pod \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005644 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005701 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005797 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005867 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvj2m\" (UniqueName: \"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") pod \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\" (UID: \"c366100e-d2a0-4be9-965f-ef7b7ad39f78\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.005964 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") pod \"365d6c18-395e-4a62-939d-a04927ffa8aa\" (UID: \"365d6c18-395e-4a62-939d-a04927ffa8aa\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.008911 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018317 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs" (OuterVolumeSpecName: "logs") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018607 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018642 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/561efc1e-a930-440f-83b1-a75217a11f32-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018660 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4sgn\" (UniqueName: \"kubernetes.io/projected/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-api-access-t4sgn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018675 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018687 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018698 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d74jj\" (UniqueName: \"kubernetes.io/projected/c653ffa0-195e-4eda-8c25-cfcff2715bdf-kube-api-access-d74jj\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018706 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018715 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018724 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018746 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018756 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cgzm\" (UniqueName: \"kubernetes.io/projected/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-kube-api-access-9cgzm\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018764 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c653ffa0-195e-4eda-8c25-cfcff2715bdf-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018773 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4168bc0-26cf-4786-9e28-95647462c372-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018782 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018790 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/365d6c18-395e-4a62-939d-a04927ffa8aa-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018798 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn54b\" (UniqueName: \"kubernetes.io/projected/c4168bc0-26cf-4786-9e28-95647462c372-kube-api-access-kn54b\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.018807 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvkf7\" (UniqueName: \"kubernetes.io/projected/561efc1e-a930-440f-83b1-a75217a11f32-kube-api-access-jvkf7\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.022675 4902 generic.go:334] "Generic (PLEG): container finished" podID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerID="af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723" exitCode=2 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.022750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b52494a8-ff56-449e-a274-b37eb4bad43d","Type":"ContainerDied","Data":"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.022776 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b52494a8-ff56-449e-a274-b37eb4bad43d","Type":"ContainerDied","Data":"b11f8ee0923ff98e0291569b03ef8eeccd15dca9bc3a6e79246d5a184580c3ae"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.022826 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.023471 4902 scope.go:117] "RemoveContainer" containerID="d81b469d4bfe4317399c28b768091ee1e4d32b1ffeb38b5ab40fde67bdde4b7f" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.035904 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v" (OuterVolumeSpecName: "kube-api-access-96b9v") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "kube-api-access-96b9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.047613 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m" (OuterVolumeSpecName: "kube-api-access-lvj2m") pod "c366100e-d2a0-4be9-965f-ef7b7ad39f78" (UID: "c366100e-d2a0-4be9-965f-ef7b7ad39f78"). InnerVolumeSpecName "kube-api-access-lvj2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.050875 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df595696d-2ftxp" event={"ID":"561efc1e-a930-440f-83b1-a75217a11f32","Type":"ContainerDied","Data":"596188194bb88a2f6c89003cb099ac4ba874000a54cf1ceffb7115b26f061225"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.050974 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df595696d-2ftxp" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.052372 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data" (OuterVolumeSpecName: "config-data") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.053924 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.060835 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4168bc0-26cf-4786-9e28-95647462c372","Type":"ContainerDied","Data":"7f3690d2641b9d3eb31fce9c2db367653c8289ff406af6ce68593f803e401401"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.060948 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.072365 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa6f350-dd82-4d59-ac24-5460acc2a8a6","Type":"ContainerDied","Data":"34e826e9786b7ad724ed0dc96336ea0075c6129a9fc9742797a8ae0fd3c41773"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.072562 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.091693 4902 scope.go:117] "RemoveContainer" containerID="29a7ab7f1ceb1b7248d2507a5eb6085cbee233d8230ecf775819b6f6ce78389e" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.100343 4902 generic.go:334] "Generic (PLEG): container finished" podID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerID="d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.100376 4902 generic.go:334] "Generic (PLEG): container finished" podID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerID="49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.100393 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.100437 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.102499 4902 generic.go:334] "Generic (PLEG): container finished" podID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerID="5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.102558 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerDied","Data":"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.102584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68564cb5c-bh98h" event={"ID":"c653ffa0-195e-4eda-8c25-cfcff2715bdf","Type":"ContainerDied","Data":"0adea585b27eb9363f63f38b86e1f0b5aee1a5b47c7b1b2342897a2515892311"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.102672 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68564cb5c-bh98h" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.106795 4902 generic.go:334] "Generic (PLEG): container finished" podID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerID="421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" exitCode=0 Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.106870 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.107416 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.107619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c366100e-d2a0-4be9-965f-ef7b7ad39f78","Type":"ContainerDied","Data":"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.107656 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c366100e-d2a0-4be9-965f-ef7b7ad39f78","Type":"ContainerDied","Data":"f71b431a165886dfcb60b7772fbf29ab480085d500ccd4f828f82ea85ca3c58b"} Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.116251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "b52494a8-ff56-449e-a274-b37eb4bad43d" (UID: "b52494a8-ff56-449e-a274-b37eb4bad43d"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122251 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvj2m\" (UniqueName: \"kubernetes.io/projected/c366100e-d2a0-4be9-965f-ef7b7ad39f78-kube-api-access-lvj2m\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122288 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122303 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96b9v\" (UniqueName: \"kubernetes.io/projected/365d6c18-395e-4a62-939d-a04927ffa8aa-kube-api-access-96b9v\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122316 4902 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.122330 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.122384 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.122428 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data podName:8d7103bd-b24b-4a0c-b68a-17373307f1aa nodeName:}" failed. No retries permitted until 2026-01-21 14:57:27.122415928 +0000 UTC m=+1409.199248957 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data") pod "rabbitmq-cell1-server-0" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.142256 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "b52494a8-ff56-449e-a274-b37eb4bad43d" (UID: "b52494a8-ff56-449e-a274-b37eb4bad43d"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.152823 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.157848 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.157964 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.161251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data" (OuterVolumeSpecName: "config-data") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.174127 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3aa6f350-dd82-4d59-ac24-5460acc2a8a6" (UID: "3aa6f350-dd82-4d59-ac24-5460acc2a8a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.190252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data" (OuterVolumeSpecName: "config-data") pod "c653ffa0-195e-4eda-8c25-cfcff2715bdf" (UID: "c653ffa0-195e-4eda-8c25-cfcff2715bdf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.194231 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data" (OuterVolumeSpecName: "config-data") pod "c366100e-d2a0-4be9-965f-ef7b7ad39f78" (UID: "c366100e-d2a0-4be9-965f-ef7b7ad39f78"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.211650 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.229888 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230225 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230294 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230347 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa6f350-dd82-4d59-ac24-5460acc2a8a6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230401 4902 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b52494a8-ff56-449e-a274-b37eb4bad43d-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230473 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c653ffa0-195e-4eda-8c25-cfcff2715bdf-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230524 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230720 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.230777 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.232202 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.233805 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data" (OuterVolumeSpecName: "config-data") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.235188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.242395 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c366100e-d2a0-4be9-965f-ef7b7ad39f78" (UID: "c366100e-d2a0-4be9-965f-ef7b7ad39f78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.242640 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "561efc1e-a930-440f-83b1-a75217a11f32" (UID: "561efc1e-a930-440f-83b1-a75217a11f32"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.244849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4168bc0-26cf-4786-9e28-95647462c372" (UID: "c4168bc0-26cf-4786-9e28-95647462c372"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.258949 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data" (OuterVolumeSpecName: "config-data") pod "365d6c18-395e-4a62-939d-a04927ffa8aa" (UID: "365d6c18-395e-4a62-939d-a04927ffa8aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.280003 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.282027 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.283331 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.283434 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334577 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334617 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/561efc1e-a930-440f-83b1-a75217a11f32-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334629 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c366100e-d2a0-4be9-965f-ef7b7ad39f78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334638 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334650 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334661 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365d6c18-395e-4a62-939d-a04927ffa8aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.334670 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4168bc0-26cf-4786-9e28-95647462c372-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 
14:57:19.472599 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.481237 4902 scope.go:117] "RemoveContainer" containerID="11db3a976cf5ea9322be5da7913baf9b9709079192d4b3c588596ad2459819bd" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.497211 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.514201 4902 scope.go:117] "RemoveContainer" containerID="c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.537778 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") pod \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.538012 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") pod \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\" (UID: \"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.538188 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.538300 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") pod \"keystone-b2af-account-create-update-852lx\" (UID: \"3bdb84d6-c599-4d87-9c27-cb32ff77d6d9\") " pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.538907 4902 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.538954 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:21.53894122 +0000 UTC m=+1403.615774239 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : configmap "openstack-scripts" not found Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.538996 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" (UID: "2acfa57e-c4e9-4809-b5cb-109f1bbb64f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.543985 4902 scope.go:117] "RemoveContainer" containerID="f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.544376 4902 projected.go:194] Error preparing data for projected volume kube-api-access-bfjwj for pod openstack/keystone-b2af-account-create-update-852lx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.544440 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj podName:3bdb84d6-c599-4d87-9c27-cb32ff77d6d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:21.544422068 +0000 UTC m=+1403.621255097 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bfjwj" (UniqueName: "kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj") pod "keystone-b2af-account-create-update-852lx" (UID: "3bdb84d6-c599-4d87-9c27-cb32ff77d6d9") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.548888 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff" (OuterVolumeSpecName: "kube-api-access-ttmff") pod "2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" (UID: "2acfa57e-c4e9-4809-b5cb-109f1bbb64f2"). InnerVolumeSpecName "kube-api-access-ttmff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.550898 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.571408 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2ff2c3d8-2d68-4255-a175-21f0df1b9276/ovn-northd/0.log" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.571505 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.573648 4902 scope.go:117] "RemoveContainer" containerID="c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.573959 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8\": container with ID starting with c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8 not found: ID does not exist" containerID="c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.573986 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8"} err="failed to get container status \"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8\": rpc error: code = NotFound desc = could not find container \"c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8\": container with ID starting with c01360894597059397e336b9507203d716a7203fc3125810c73230c4e7afdff8 not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.574006 4902 scope.go:117] "RemoveContainer" containerID="f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.577231 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48\": container with ID starting with f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48 not found: ID does not exist" containerID="f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.577242 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.577266 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.577263 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48"} err="failed to get container status \"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48\": rpc error: code = NotFound desc = could not find container \"f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48\": container with ID starting with f91dd750dae0fff65fe0eaae6d224ba3e3fee86b819473f7d78c8dc398d11b48 not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.577286 4902 scope.go:117] "RemoveContainer" containerID="af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.588308 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.595220 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-b755cd77b-nd6p7"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.598146 4902 scope.go:117] "RemoveContainer" containerID="af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.601222 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723\": container with ID starting with af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723 not found: ID does not exist" containerID="af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.601270 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723"} err="failed to get container status \"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723\": rpc error: code = NotFound desc = could not find container \"af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723\": container with ID starting with af76cdb24e608f2d712d8002bd66649eeba10782b061f4510a2a927ada998723 not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.601297 4902 scope.go:117] "RemoveContainer" containerID="709dea640199a3e29bbff0c5bd046ca78f3c55c233e1043ae28cc59e518b7cd2" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.601412 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.606457 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.638783 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639168 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639231 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639260 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639285 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639319 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639356 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639420 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639491 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639617 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639682 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639771 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") pod \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\" (UID: \"2ff2c3d8-2d68-4255-a175-21f0df1b9276\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639815 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") pod \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\" (UID: \"2c70bcdb-316e-4246-b333-ddaf6438c6ee\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.639507 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640124 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config" (OuterVolumeSpecName: "config") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640686 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts" (OuterVolumeSpecName: "scripts") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640743 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640772 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.640793 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttmff\" (UniqueName: \"kubernetes.io/projected/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2-kube-api-access-ttmff\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.641174 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.644099 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5df595696d-2ftxp"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.644775 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data" (OuterVolumeSpecName: "config-data") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.648334 4902 scope.go:117] "RemoveContainer" containerID="b91bda9e24415f053bbf7e3136ae0eb36d0535911dff5c3a69ee2c9fd40feb34" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.648363 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6" (OuterVolumeSpecName: "kube-api-access-j24b6") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "kube-api-access-j24b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.648673 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.650535 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj" (OuterVolumeSpecName: "kube-api-access-5xllj") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "kube-api-access-5xllj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.653916 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.659358 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.666229 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.668959 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.674801 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-68564cb5c-bh98h"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.687897 4902 scope.go:117] "RemoveContainer" containerID="635d235f3800b93dc934010299b8ed6cf8c1efd38064d7aecd2aa2faa2ae46a0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.699494 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.701493 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.702789 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.712194 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.712353 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.716354 4902 scope.go:117] "RemoveContainer" containerID="baf5060a9be38be6557c2e269eeef0d7067b99a8ffc55de9fabcd6c3d7fd4375" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.728722 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.729004 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.735382 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2ff2c3d8-2d68-4255-a175-21f0df1b9276" (UID: "2ff2c3d8-2d68-4255-a175-21f0df1b9276"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742242 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j24b6\" (UniqueName: \"kubernetes.io/projected/2ff2c3d8-2d68-4255-a175-21f0df1b9276-kube-api-access-j24b6\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742269 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742278 4902 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742289 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742298 4902 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742305 4902 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742314 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ff2c3d8-2d68-4255-a175-21f0df1b9276-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742322 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xllj\" (UniqueName: \"kubernetes.io/projected/2c70bcdb-316e-4246-b333-ddaf6438c6ee-kube-api-access-5xllj\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742331 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.742339 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ff2c3d8-2d68-4255-a175-21f0df1b9276-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.743346 4902 scope.go:117] "RemoveContainer" containerID="6bf1eb34ffb8ebb875ad0db959e31364a4f9a1f5a32e44cce848251c4a780377" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.744389 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "2c70bcdb-316e-4246-b333-ddaf6438c6ee" (UID: "2c70bcdb-316e-4246-b333-ddaf6438c6ee"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.766242 4902 scope.go:117] "RemoveContainer" containerID="090f15138593116ea5509f9b1db81b64387863cddd781c3e2ec064762515d25e" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.784538 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.789445 4902 scope.go:117] "RemoveContainer" containerID="5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.822951 4902 scope.go:117] "RemoveContainer" containerID="43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.843756 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.843828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.843878 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.843965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.844056 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.844087 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") pod \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\" (UID: \"0ea9ca5b-2e24-41de-8a99-a882ec11c222\") " Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.844406 4902 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c70bcdb-316e-4246-b333-ddaf6438c6ee-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.844398 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs" (OuterVolumeSpecName: "logs") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.852563 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4" (OuterVolumeSpecName: "kube-api-access-xk9n4") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "kube-api-access-xk9n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.857390 4902 scope.go:117] "RemoveContainer" containerID="5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.865422 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0\": container with ID starting with 5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0 not found: ID does not exist" containerID="5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.865517 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0"} err="failed to get container status \"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0\": rpc error: code = NotFound desc = could not find container \"5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0\": container with ID starting with 5be4c530ff545084569229ccab37f8a3845f061ba816f6f5089f2b73e5d798d0 not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.865552 4902 scope.go:117] "RemoveContainer" containerID="43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.866182 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb\": container with ID starting with 43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb not found: ID does not exist" containerID="43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.866215 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb"} err="failed to get container status \"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb\": rpc error: code = NotFound desc = could not find container \"43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb\": container with ID starting with 43f51510db5fad3c2b3f386cc3a65f8be471fb92981825fc30155953a875e2bb not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.866240 4902 scope.go:117] "RemoveContainer" containerID="421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.871744 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data" (OuterVolumeSpecName: "config-data") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.876579 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.889368 4902 scope.go:117] "RemoveContainer" containerID="421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" Jan 21 14:57:19 crc kubenswrapper[4902]: E0121 14:57:19.891499 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812\": container with ID starting with 421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812 not found: ID does not exist" containerID="421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.891543 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812"} err="failed to get container status \"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812\": rpc error: code = NotFound desc = could not find container \"421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812\": container with ID starting with 421a9af59b9cad2fcc1b7e057f30f766db2a2c4527be7fbda00031710c3a7812 not found: ID does not exist" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.895475 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.903829 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0ea9ca5b-2e24-41de-8a99-a882ec11c222" (UID: "0ea9ca5b-2e24-41de-8a99-a882ec11c222"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948640 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948931 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ea9ca5b-2e24-41de-8a99-a882ec11c222-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948968 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948979 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.948993 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea9ca5b-2e24-41de-8a99-a882ec11c222-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:19 crc kubenswrapper[4902]: I0121 14:57:19.949001 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk9n4\" (UniqueName: \"kubernetes.io/projected/0ea9ca5b-2e24-41de-8a99-a882ec11c222-kube-api-access-xk9n4\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124004 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerID="fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" exitCode=0 Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124107 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerDied","Data":"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124158 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ea9ca5b-2e24-41de-8a99-a882ec11c222","Type":"ContainerDied","Data":"6a0fa8e1aa73ccaec735410bd00188e5105d8445c279b0829562f3033236ffec"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124186 4902 scope.go:117] "RemoveContainer" containerID="fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.124350 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.135007 4902 generic.go:334] "Generic (PLEG): container finished" podID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" containerID="c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" exitCode=0 Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.135107 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.135209 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2c70bcdb-316e-4246-b333-ddaf6438c6ee","Type":"ContainerDied","Data":"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.135234 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2c70bcdb-316e-4246-b333-ddaf6438c6ee","Type":"ContainerDied","Data":"012af9c88121ed6a56a653b1c142d5e67759c3d8ac9efeda00265ffdb3f91980"} Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.135535 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.137115 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.140230 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.140279 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.141443 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2ff2c3d8-2d68-4255-a175-21f0df1b9276/ovn-northd/0.log" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.141559 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.142228 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2ff2c3d8-2d68-4255-a175-21f0df1b9276","Type":"ContainerDied","Data":"710e2e791f44aa4a7534510792c8ca7893edb756d648bcd8efc2a038da9f4e30"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.161638 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2c87v" event={"ID":"2acfa57e-c4e9-4809-b5cb-109f1bbb64f2","Type":"ContainerDied","Data":"24e8d59ec3c64b717babfef7f378c16dbc7782ee7d0c22d80830c614a5f49681"} Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.161891 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2c87v" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.162695 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b2af-account-create-update-852lx" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.245385 4902 scope.go:117] "RemoveContainer" containerID="155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.310687 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" path="/var/lib/kubelet/pods/365d6c18-395e-4a62-939d-a04927ffa8aa/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.311688 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" path="/var/lib/kubelet/pods/3aa6f350-dd82-4d59-ac24-5460acc2a8a6/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.312636 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="561efc1e-a930-440f-83b1-a75217a11f32" path="/var/lib/kubelet/pods/561efc1e-a930-440f-83b1-a75217a11f32/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.314196 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" path="/var/lib/kubelet/pods/b52494a8-ff56-449e-a274-b37eb4bad43d/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.314829 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71fc896-318c-4277-bb32-70e3424a26c9" path="/var/lib/kubelet/pods/b71fc896-318c-4277-bb32-70e3424a26c9/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.324723 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" path="/var/lib/kubelet/pods/c366100e-d2a0-4be9-965f-ef7b7ad39f78/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.326098 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4168bc0-26cf-4786-9e28-95647462c372" path="/var/lib/kubelet/pods/c4168bc0-26cf-4786-9e28-95647462c372/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.328603 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" path="/var/lib/kubelet/pods/c653ffa0-195e-4eda-8c25-cfcff2715bdf/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.330614 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4d047b-49f4-4b55-a053-081f1be632b7" path="/var/lib/kubelet/pods/db4d047b-49f4-4b55-a053-081f1be632b7/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.331839 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff41f7d4-e15a-4fc3-afd9-5d86fe05768f" path="/var/lib/kubelet/pods/ff41f7d4-e15a-4fc3-afd9-5d86fe05768f/volumes" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.334343 4902 scope.go:117] "RemoveContainer" containerID="fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.338900 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748\": container with ID starting with fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748 not found: ID does not exist" containerID="fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.338962 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748"} err="failed to get container status \"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748\": rpc error: code = NotFound desc = could not find container \"fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748\": container with ID starting with fb8a16481c25f43529b07f7f8001cb9956422d1ce5439792d3d789f5d7081748 not found: ID does not exist" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.338997 4902 scope.go:117] "RemoveContainer" containerID="155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339629 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339684 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339703 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339714 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2c87v"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339724 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.339768 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.343497 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f\": container with ID starting with 155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f not found: ID does not exist" containerID="155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.343535 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f"} err="failed to get container status \"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f\": rpc error: code = NotFound desc = could not find container \"155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f\": container with ID starting with 155f66b1a4bd668276fecfb3eb37d48689799d1a177fff0ba57eb5a7b284190f not found: ID does not exist" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.343556 4902 scope.go:117] "RemoveContainer" containerID="c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.371310 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.375281 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b2af-account-create-update-852lx"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.391584 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.396151 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.398274 4902 scope.go:117] "RemoveContainer" 
containerID="c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.398717 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2\": container with ID starting with c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2 not found: ID does not exist" containerID="c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.398743 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2"} err="failed to get container status \"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2\": rpc error: code = NotFound desc = could not find container \"c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2\": container with ID starting with c008d25306bb48ab7f8510af78d46198922b24f4ffc69239b6119f6af4eadba2 not found: ID does not exist" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.398765 4902 scope.go:117] "RemoveContainer" containerID="e8a096d5f6a2e59562479be65d1cff285382747948d319ddcc17f47f718069db" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.419792 4902 scope.go:117] "RemoveContainer" containerID="c4adcc4c76cce96e7677ca792f5ca78a7e382164071237e39c2950d3440922c3" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.443517 4902 scope.go:117] "RemoveContainer" containerID="507ce1ddbb5ab555237835c4f50235305caf3b3ba28a9df9ec0892a88f2b0f8f" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.457655 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.457702 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfjwj\" (UniqueName: \"kubernetes.io/projected/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9-kube-api-access-bfjwj\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.457732 4902 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:57:20 crc kubenswrapper[4902]: E0121 14:57:20.457801 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data podName:67f50f65-9151-4444-9680-f86e0f256069 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:28.457784145 +0000 UTC m=+1410.534617174 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data") pod "rabbitmq-server-0" (UID: "67f50f65-9151-4444-9680-f86e0f256069") : configmap "rabbitmq-config-data" not found Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.732323 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.763903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.763953 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.763991 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764019 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764241 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764284 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764337 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764397 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764421 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764457 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: 
\"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.764502 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") pod \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\" (UID: \"8d7103bd-b24b-4a0c-b68a-17373307f1aa\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.766335 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.767196 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.771080 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98" (OuterVolumeSpecName: "kube-api-access-rkc98") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "kube-api-access-rkc98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.771089 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.771709 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.772852 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.772881 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.774627 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info" (OuterVolumeSpecName: "pod-info") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.787106 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data" (OuterVolumeSpecName: "config-data") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.810392 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf" (OuterVolumeSpecName: "server-conf") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.862156 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865776 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865888 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865945 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865967 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.865996 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.866035 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x82cz\" (UniqueName: 
\"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.866137 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.866184 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") pod \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\" (UID: \"19a933f8-5063-4cd1-8d3d-420e82d4e1fd\") " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.868180 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.869455 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.870428 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.873992 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkc98\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-kube-api-access-rkc98\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874050 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874064 4902 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874074 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874083 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874091 4902 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8d7103bd-b24b-4a0c-b68a-17373307f1aa-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874100 4902 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8d7103bd-b24b-4a0c-b68a-17373307f1aa-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874109 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874118 4902 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8d7103bd-b24b-4a0c-b68a-17373307f1aa-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.874126 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.875271 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.877272 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.878514 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8d7103bd-b24b-4a0c-b68a-17373307f1aa" (UID: "8d7103bd-b24b-4a0c-b68a-17373307f1aa"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.897188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz" (OuterVolumeSpecName: "kube-api-access-x82cz") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "kube-api-access-x82cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.906266 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.908598 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.948278 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "19a933f8-5063-4cd1-8d3d-420e82d4e1fd" (UID: "19a933f8-5063-4cd1-8d3d-420e82d4e1fd"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.974295 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975322 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975364 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975375 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975385 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x82cz\" (UniqueName: \"kubernetes.io/projected/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kube-api-access-x82cz\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975395 4902 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975402 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975411 4902 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975419 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975438 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8d7103bd-b24b-4a0c-b68a-17373307f1aa-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.975448 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/19a933f8-5063-4cd1-8d3d-420e82d4e1fd-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:20 crc kubenswrapper[4902]: I0121 14:57:20.993480 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082551 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082607 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082658 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082707 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082776 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082803 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082962 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.082989 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.083097 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") pod \"67f50f65-9151-4444-9680-f86e0f256069\" (UID: \"67f50f65-9151-4444-9680-f86e0f256069\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.083449 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.083697 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.089098 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.090260 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.090402 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.092798 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.093257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r" (OuterVolumeSpecName: "kube-api-access-sth8r") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "kube-api-access-sth8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.098782 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.104694 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info" (OuterVolumeSpecName: "pod-info") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.105940 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data" (OuterVolumeSpecName: "config-data") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.131396 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf" (OuterVolumeSpecName: "server-conf") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184831 4902 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f50f65-9151-4444-9680-f86e0f256069-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184858 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184870 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sth8r\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-kube-api-access-sth8r\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184878 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184898 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184908 4902 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184917 4902 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f50f65-9151-4444-9680-f86e0f256069-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184925 4902 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184933 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f50f65-9151-4444-9680-f86e0f256069-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.184941 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.186358 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "67f50f65-9151-4444-9680-f86e0f256069" (UID: "67f50f65-9151-4444-9680-f86e0f256069"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.189485 4902 generic.go:334] "Generic (PLEG): container finished" podID="67f50f65-9151-4444-9680-f86e0f256069" containerID="d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852" exitCode=0 Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.189607 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.190745 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerDied","Data":"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.190814 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f50f65-9151-4444-9680-f86e0f256069","Type":"ContainerDied","Data":"08ee02c4a3aa1bd9f0c6f8daed756e3d6ec0c75c1f2a0da20740a10a51dd17d5"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.190833 4902 scope.go:117] "RemoveContainer" containerID="d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.194199 4902 generic.go:334] "Generic (PLEG): container finished" podID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" exitCode=0 Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.194247 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerDied","Data":"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.194304 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"19a933f8-5063-4cd1-8d3d-420e82d4e1fd","Type":"ContainerDied","Data":"bf2f4711a987253bd77a78040ec2bd0cf16012bd15444fb1b640251be787c875"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.194271 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.196015 4902 generic.go:334] "Generic (PLEG): container finished" podID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerID="ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e" exitCode=0 Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.196093 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5684459db4-jgdkj" event={"ID":"8e00c7d5-7199-4602-9d3b-5af4f14124bc","Type":"ContainerDied","Data":"ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.199524 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerID="9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70" exitCode=0 Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.199585 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.199589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerDied","Data":"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.199695 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8d7103bd-b24b-4a0c-b68a-17373307f1aa","Type":"ContainerDied","Data":"43205feda26dd86650cc6a1b706524efcf814c15daa6ef3c2cb46d3126d049ac"} Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.245036 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.290485 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.290522 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f50f65-9151-4444-9680-f86e0f256069-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.315788 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.320588 4902 scope.go:117] "RemoveContainer" containerID="61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.324291 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.348198 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.358397 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.368477 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.380533 4902 scope.go:117] "RemoveContainer" containerID="d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852" 
Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.381193 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852\": container with ID starting with d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852 not found: ID does not exist" containerID="d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381224 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852"} err="failed to get container status \"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852\": rpc error: code = NotFound desc = could not find container \"d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852\": container with ID starting with d08cd7af47d6cf6012c6eba2ad5dc9f83cf90eb79aaa00f9f6fef153934a3852 not found: ID does not exist" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381247 4902 scope.go:117] "RemoveContainer" containerID="61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381268 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.381715 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80\": container with ID starting with 61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80 not found: ID does not exist" containerID="61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381755 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80"} err="failed to get container status \"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80\": rpc error: code = NotFound desc = could not find container \"61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80\": container with ID starting with 61d699bd9d7518ccbc0b054c17af7f1dd18564a940361db0161ef5daafb50c80 not found: ID does not exist" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.381785 4902 scope.go:117] "RemoveContainer" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.420339 4902 scope.go:117] "RemoveContainer" containerID="231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659" Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.446325 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.447123 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.448663 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.449930 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.449999 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.450062 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.451239 4902 scope.go:117] "RemoveContainer" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.459494 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.459586 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.459525 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9\": container with ID starting with 21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9 not found: ID does not exist" containerID="21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.459721 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9"} err="failed to get container status \"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9\": rpc error: code = NotFound desc = could not find container \"21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9\": container with ID starting with 21a1123acb107a45eb08fbc43e3fb4ed7ba2ebc1d5f54a398f068c8296c06ec9 not found: ID does not exist" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.459755 4902 scope.go:117] "RemoveContainer" containerID="231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659" Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.460588 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659\": container with ID starting with 231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659 not found: ID does not exist" containerID="231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.460620 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659"} err="failed to get container status \"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659\": rpc error: code = NotFound desc = could not find container \"231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659\": container with ID starting with 231c6e3ba9f72e73ba62f1ebc540eb1af701534ca3cefe4ed6a2a60507ad8659 not found: ID does not exist" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.460639 4902 scope.go:117] "RemoveContainer" containerID="9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.502985 4902 scope.go:117] "RemoveContainer" containerID="92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.514666 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.528881 4902 scope.go:117] "RemoveContainer" containerID="9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70" Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.533713 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70\": container with ID starting with 9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70 not found: ID does not exist" containerID="9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.533760 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70"} err="failed to get container status \"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70\": rpc error: code = NotFound desc = could not find container \"9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70\": container with ID starting with 9533d5a72c1371f39dbd7c8f8d4ad8a3100e6fc293c2edfd8a2d067e63633c70 not found: ID does not exist" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.533785 4902 scope.go:117] "RemoveContainer" containerID="92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54" Jan 21 14:57:21 crc kubenswrapper[4902]: E0121 14:57:21.534210 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54\": container with ID starting with 92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54 not found: ID does not exist" containerID="92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.534234 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54"} err="failed to get container status \"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54\": rpc error: code = NotFound desc = could not find container \"92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54\": container with ID starting with 92e5d5d244ea3ad93a7e80ee12639d5b07fffbb547e78aa8ac616bbb354c0c54 not found: ID does not exist" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604599 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604676 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: 
\"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604734 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604842 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604893 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.604909 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") pod \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\" (UID: \"8e00c7d5-7199-4602-9d3b-5af4f14124bc\") " Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.634578 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.637228 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts" (OuterVolumeSpecName: "scripts") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.650277 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.650463 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk" (OuterVolumeSpecName: "kube-api-access-rtlrk") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "kube-api-access-rtlrk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.706911 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtlrk\" (UniqueName: \"kubernetes.io/projected/8e00c7d5-7199-4602-9d3b-5af4f14124bc-kube-api-access-rtlrk\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.706948 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.706958 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.706967 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.743008 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data" (OuterVolumeSpecName: "config-data") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.747973 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.774277 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.789324 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8e00c7d5-7199-4602-9d3b-5af4f14124bc" (UID: "8e00c7d5-7199-4602-9d3b-5af4f14124bc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.808018 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.808357 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.808451 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:21 crc kubenswrapper[4902]: I0121 14:57:21.808505 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e00c7d5-7199-4602-9d3b-5af4f14124bc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.050562 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.115522 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") pod \"dbc235c8-beef-433d-b663-e1d09b6a9b65\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.115705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") pod \"dbc235c8-beef-433d-b663-e1d09b6a9b65\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.115803 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") pod \"dbc235c8-beef-433d-b663-e1d09b6a9b65\" (UID: \"dbc235c8-beef-433d-b663-e1d09b6a9b65\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.119330 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5" (OuterVolumeSpecName: "kube-api-access-8tbt5") pod "dbc235c8-beef-433d-b663-e1d09b6a9b65" (UID: "dbc235c8-beef-433d-b663-e1d09b6a9b65"). InnerVolumeSpecName "kube-api-access-8tbt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.139187 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbc235c8-beef-433d-b663-e1d09b6a9b65" (UID: "dbc235c8-beef-433d-b663-e1d09b6a9b65"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.142192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data" (OuterVolumeSpecName: "config-data") pod "dbc235c8-beef-433d-b663-e1d09b6a9b65" (UID: "dbc235c8-beef-433d-b663-e1d09b6a9b65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.217099 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.217127 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tbt5\" (UniqueName: \"kubernetes.io/projected/dbc235c8-beef-433d-b663-e1d09b6a9b65-kube-api-access-8tbt5\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.217142 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbc235c8-beef-433d-b663-e1d09b6a9b65-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221809 4902 generic.go:334] "Generic (PLEG): container finished" podID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" exitCode=0 Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221881 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dbc235c8-beef-433d-b663-e1d09b6a9b65","Type":"ContainerDied","Data":"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae"} Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221962 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dbc235c8-beef-433d-b663-e1d09b6a9b65","Type":"ContainerDied","Data":"b81adfeafc100f247345bb4dc1ec0bbf1a637bdabc4a363633412eb4f663c5f6"} Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.221986 4902 scope.go:117] "RemoveContainer" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.223078 4902 generic.go:334] "Generic (PLEG): container finished" podID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" exitCode=0 Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.223132 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"359a818e-1c34-4dfd-bb59-0e72280a85a0","Type":"ContainerDied","Data":"a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe"} Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.227635 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5684459db4-jgdkj" event={"ID":"8e00c7d5-7199-4602-9d3b-5af4f14124bc","Type":"ContainerDied","Data":"a7b81b6927c5878e4864d8eea63ac6db97be31623e53b2291bbb5d03097d4cf8"} Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.227649 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5684459db4-jgdkj" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.229940 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerID="51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c" exitCode=0 Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.230170 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerDied","Data":"51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c"} Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.230273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7887695489-rtxbl" event={"ID":"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5","Type":"ContainerDied","Data":"b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9"} Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.230364 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ec2a7ebbd2aee467c0043661c91112bee51c7e8687af847e64a040bb7767f9" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.252553 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.257559 4902 scope.go:117] "RemoveContainer" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" Jan 21 14:57:22 crc kubenswrapper[4902]: E0121 14:57:22.257890 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae\": container with ID starting with 357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae not found: ID does not exist" containerID="357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.257917 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae"} err="failed to get container status \"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae\": rpc error: code = NotFound desc = could not find container \"357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae\": container with ID starting with 357518db97e5e8ee7e0173e1ce7359fb0b0662d5116ba83c387d48c37a5cdaae not found: ID does not exist" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.257937 4902 scope.go:117] "RemoveContainer" containerID="ea8dbb434ad9bd3e85adcd00febd132baf741c5aae1afe358fb761a39bcb889e" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.315955 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" path="/var/lib/kubelet/pods/0ea9ca5b-2e24-41de-8a99-a882ec11c222/volumes" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317512 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317556 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hptjz\" (UniqueName: 
\"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317606 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317647 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317672 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.317792 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") pod \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\" (UID: \"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5\") " Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.318877 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" path="/var/lib/kubelet/pods/19a933f8-5063-4cd1-8d3d-420e82d4e1fd/volumes" Jan 21 14:57:22 crc kubenswrapper[4902]: I0121 14:57:22.320101 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" path="/var/lib/kubelet/pods/2acfa57e-c4e9-4809-b5cb-109f1bbb64f2/volumes" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.001553 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5df595696d-2ftxp" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: i/o timeout" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.001977 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5df595696d-2ftxp" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.088028 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: 
"9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.088308 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz" (OuterVolumeSpecName: "kube-api-access-hptjz") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "kube-api-access-hptjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.088968 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" path="/var/lib/kubelet/pods/2c70bcdb-316e-4246-b333-ddaf6438c6ee/volumes" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.092221 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" path="/var/lib/kubelet/pods/2ff2c3d8-2d68-4255-a175-21f0df1b9276/volumes" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.105019 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdb84d6-c599-4d87-9c27-cb32ff77d6d9" path="/var/lib/kubelet/pods/3bdb84d6-c599-4d87-9c27-cb32ff77d6d9/volumes" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.111753 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f50f65-9151-4444-9680-f86e0f256069" path="/var/lib/kubelet/pods/67f50f65-9151-4444-9680-f86e0f256069/volumes" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.117768 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" path="/var/lib/kubelet/pods/8d7103bd-b24b-4a0c-b68a-17373307f1aa/volumes" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.152942 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.152979 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hptjz\" (UniqueName: \"kubernetes.io/projected/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-kube-api-access-hptjz\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.166399 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.166438 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config" (OuterVolumeSpecName: "config") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.170564 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.171251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.196814 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" (UID: "9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.244955 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7887695489-rtxbl" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254088 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254157 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254170 4902 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254184 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.254197 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288731 4902 kubelet_pods.go:2476] "Failed to reduce cpu time for pod pending volume cleanup" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" err="openat2 /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc3ac42_826c_4f25_a3f7_d1ab2eb8cbf5.slice/cgroup.controllers: no such file or directory" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288795 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288818 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288832 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"] Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288843 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5684459db4-jgdkj"] Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288857 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"359a818e-1c34-4dfd-bb59-0e72280a85a0","Type":"ContainerDied","Data":"4ffea13c5b1ca8a19fa0ab7ab117654ce080a9b7f7c854db7559f017b9ca3c40"} Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.288878 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffea13c5b1ca8a19fa0ab7ab117654ce080a9b7f7c854db7559f017b9ca3c40" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.302199 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.310576 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.315556 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7887695489-rtxbl"] Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.354784 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") pod \"359a818e-1c34-4dfd-bb59-0e72280a85a0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.354884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"359a818e-1c34-4dfd-bb59-0e72280a85a0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.355008 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") pod \"359a818e-1c34-4dfd-bb59-0e72280a85a0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.363113 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt" (OuterVolumeSpecName: "kube-api-access-ww8pt") pod "359a818e-1c34-4dfd-bb59-0e72280a85a0" (UID: "359a818e-1c34-4dfd-bb59-0e72280a85a0"). InnerVolumeSpecName "kube-api-access-ww8pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:23 crc kubenswrapper[4902]: E0121 14:57:23.370672 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data podName:359a818e-1c34-4dfd-bb59-0e72280a85a0 nodeName:}" failed. No retries permitted until 2026-01-21 14:57:23.870644513 +0000 UTC m=+1405.947477542 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data") pod "359a818e-1c34-4dfd-bb59-0e72280a85a0" (UID: "359a818e-1c34-4dfd-bb59-0e72280a85a0") : error deleting /var/lib/kubelet/pods/359a818e-1c34-4dfd-bb59-0e72280a85a0/volume-subpaths: remove /var/lib/kubelet/pods/359a818e-1c34-4dfd-bb59-0e72280a85a0/volume-subpaths: no such file or directory Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.372901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "359a818e-1c34-4dfd-bb59-0e72280a85a0" (UID: "359a818e-1c34-4dfd-bb59-0e72280a85a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.456829 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww8pt\" (UniqueName: \"kubernetes.io/projected/359a818e-1c34-4dfd-bb59-0e72280a85a0-kube-api-access-ww8pt\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.457157 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.963819 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") pod \"359a818e-1c34-4dfd-bb59-0e72280a85a0\" (UID: \"359a818e-1c34-4dfd-bb59-0e72280a85a0\") " Jan 21 14:57:23 crc kubenswrapper[4902]: I0121 14:57:23.968626 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data" (OuterVolumeSpecName: "config-data") pod "359a818e-1c34-4dfd-bb59-0e72280a85a0" (UID: "359a818e-1c34-4dfd-bb59-0e72280a85a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.066100 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359a818e-1c34-4dfd-bb59-0e72280a85a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.251927 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.282641 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.289249 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.308949 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" path="/var/lib/kubelet/pods/359a818e-1c34-4dfd-bb59-0e72280a85a0/volumes" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.309967 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" path="/var/lib/kubelet/pods/8e00c7d5-7199-4602-9d3b-5af4f14124bc/volumes" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.311218 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" path="/var/lib/kubelet/pods/9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5/volumes" Jan 21 14:57:24 crc kubenswrapper[4902]: I0121 14:57:24.313196 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" path="/var/lib/kubelet/pods/dbc235c8-beef-433d-b663-e1d09b6a9b65/volumes" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.019443 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080243 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080305 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080381 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080433 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080470 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080522 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080560 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.080643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") pod \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\" (UID: \"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53\") " Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.082033 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.082329 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.086147 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s" (OuterVolumeSpecName: "kube-api-access-r5c9s") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "kube-api-access-r5c9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.097446 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts" (OuterVolumeSpecName: "scripts") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.102280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.119653 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.146357 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.162506 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data" (OuterVolumeSpecName: "config-data") pod "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" (UID: "874c6c46-dedc-4ec9-8ee5-c45ef9cddb53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182344 4902 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182381 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182390 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182400 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5c9s\" (UniqueName: \"kubernetes.io/projected/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-kube-api-access-r5c9s\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182411 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182419 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182430 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.182439 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262903 4902 generic.go:334] "Generic (PLEG): container finished" podID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerID="c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" exitCode=0 Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262955 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67"} Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262982 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262997 4902 scope.go:117] "RemoveContainer" containerID="d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.262985 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"874c6c46-dedc-4ec9-8ee5-c45ef9cddb53","Type":"ContainerDied","Data":"c200d00278992d8d2cca7e33c912295c7207132824d0c1563c30e02dcd83a48e"} Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.297274 4902 scope.go:117] "RemoveContainer" containerID="91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.303338 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.308612 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.314565 4902 scope.go:117] "RemoveContainer" containerID="c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.334689 4902 scope.go:117] "RemoveContainer" containerID="49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.359091 4902 scope.go:117] "RemoveContainer" containerID="d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" Jan 21 14:57:25 crc kubenswrapper[4902]: E0121 14:57:25.359426 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0\": container with ID starting with d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0 not found: ID does not exist" containerID="d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.359454 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0"} err="failed to get container status \"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0\": rpc error: code = NotFound desc = could not find container \"d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0\": container with ID starting with d6f6a5c0a00cdeaf839297a35c3f4236035b989a246637f0676066deb3a916b0 not found: ID does not exist" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.359476 4902 scope.go:117] "RemoveContainer" containerID="91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" Jan 21 14:57:25 crc kubenswrapper[4902]: E0121 14:57:25.359795 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881\": container with ID starting with 91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881 not found: ID does not exist" containerID="91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881" Jan 21 14:57:25 crc kubenswrapper[4902]: 
I0121 14:57:25.359851 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881"} err="failed to get container status \"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881\": rpc error: code = NotFound desc = could not find container \"91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881\": container with ID starting with 91582d6efb1d6dbb23eab06edc79fb57a18ff0e6dec7821eb000c4ad0d18e881 not found: ID does not exist" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.359944 4902 scope.go:117] "RemoveContainer" containerID="c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" Jan 21 14:57:25 crc kubenswrapper[4902]: E0121 14:57:25.360278 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67\": container with ID starting with c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67 not found: ID does not exist" containerID="c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.360312 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67"} err="failed to get container status \"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67\": rpc error: code = NotFound desc = could not find container \"c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67\": container with ID starting with c2a4cbdb4c981ccfa8da8a93cd66458573521003381a01e3bfdbf3ed241a8b67 not found: ID does not exist" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.360331 4902 scope.go:117] "RemoveContainer" containerID="49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" Jan 21 14:57:25 crc kubenswrapper[4902]: E0121 14:57:25.360587 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110\": container with ID starting with 49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110 not found: ID does not exist" containerID="49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110" Jan 21 14:57:25 crc kubenswrapper[4902]: I0121 14:57:25.360608 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110"} err="failed to get container status \"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110\": rpc error: code = NotFound desc = could not find container \"49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110\": container with ID starting with 49dbcea536eee6efded3103e0667ccff38e7917d7173534a6d8a56f77f69b110 not found: ID does not exist" Jan 21 14:57:26 crc kubenswrapper[4902]: I0121 14:57:26.304270 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" path="/var/lib/kubelet/pods/874c6c46-dedc-4ec9-8ee5-c45ef9cddb53/volumes" Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.443152 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.443590 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.443883 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.443924 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.444361 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.446262 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.449918 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:26 crc kubenswrapper[4902]: E0121 14:57:26.450007 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.442518 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container 
process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.443386 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.443855 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.443891 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.444000 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.445035 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.446036 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:31 crc kubenswrapper[4902]: E0121 14:57:31.446085 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.442539 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" 
containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.444233 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.444277 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.444974 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.445059 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.446526 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.447925 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:36 crc kubenswrapper[4902]: E0121 14:57:36.447984 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.442467 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.444225 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.444318 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.444809 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.444885 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.446436 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.450371 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:57:41 crc kubenswrapper[4902]: E0121 14:57:41.450528 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-4sm9h" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.089738 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166545 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166699 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.166812 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") pod \"ee214fec-083a-4abd-b65e-003bccee24fa\" (UID: \"ee214fec-083a-4abd-b65e-003bccee24fa\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.167435 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock" (OuterVolumeSpecName: "lock") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.167562 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache" (OuterVolumeSpecName: "cache") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.167821 4902 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-cache\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.167843 4902 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ee214fec-083a-4abd-b65e-003bccee24fa-lock\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.174459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.188661 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq" (OuterVolumeSpecName: "kube-api-access-hqvxq") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "kube-api-access-hqvxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.197228 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ee214fec-083a-4abd-b65e-003bccee24fa" (UID: "ee214fec-083a-4abd-b65e-003bccee24fa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.268968 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.269026 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.269036 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvxq\" (UniqueName: \"kubernetes.io/projected/ee214fec-083a-4abd-b65e-003bccee24fa-kube-api-access-hqvxq\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.289329 4902 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.345291 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4sm9h_bfa512c9-b91a-4a30-8a23-548ef53b094e/ovs-vswitchd/0.log" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.346500 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.370169 4902 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440414 4902 generic.go:334] "Generic (PLEG): container finished" podID="ee214fec-083a-4abd-b65e-003bccee24fa" containerID="71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" exitCode=137 Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc"} Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440719 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ee214fec-083a-4abd-b65e-003bccee24fa","Type":"ContainerDied","Data":"6c463f82994bcd8248458f35757eded9002826e57bff7f1770ee0560e5c7ce9d"} Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440537 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.440771 4902 scope.go:117] "RemoveContainer" containerID="71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.450919 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4sm9h_bfa512c9-b91a-4a30-8a23-548ef53b094e/ovs-vswitchd/0.log" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.467065 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4sm9h" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.467444 4902 scope.go:117] "RemoveContainer" containerID="a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.468005 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" exitCode=137 Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.468307 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerDied","Data":"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1"} Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.468389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4sm9h" event={"ID":"bfa512c9-b91a-4a30-8a23-548ef53b094e","Type":"ContainerDied","Data":"802447b9b93240937e871b9f5fd717abb6508a7f8537087545c7900d7f4a54d8"} Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470684 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470768 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470868 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470906 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470930 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.470983 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") pod \"bfa512c9-b91a-4a30-8a23-548ef53b094e\" (UID: \"bfa512c9-b91a-4a30-8a23-548ef53b094e\") " Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.471359 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run" (OuterVolumeSpecName: "var-run") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.471399 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.471426 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log" (OuterVolumeSpecName: "var-log") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.471405 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib" (OuterVolumeSpecName: "var-lib") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.474436 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts" (OuterVolumeSpecName: "scripts") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.480399 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn" (OuterVolumeSpecName: "kube-api-access-488bn") pod "bfa512c9-b91a-4a30-8a23-548ef53b094e" (UID: "bfa512c9-b91a-4a30-8a23-548ef53b094e"). InnerVolumeSpecName "kube-api-access-488bn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.491707 4902 scope.go:117] "RemoveContainer" containerID="589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.509306 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.514221 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.520591 4902 scope.go:117] "RemoveContainer" containerID="6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.538481 4902 scope.go:117] "RemoveContainer" containerID="0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.556011 4902 scope.go:117] "RemoveContainer" containerID="a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572540 4902 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572570 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa512c9-b91a-4a30-8a23-548ef53b094e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572582 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-488bn\" (UniqueName: \"kubernetes.io/projected/bfa512c9-b91a-4a30-8a23-548ef53b094e-kube-api-access-488bn\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572594 4902 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572604 4902 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-lib\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.572614 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa512c9-b91a-4a30-8a23-548ef53b094e-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.573203 4902 scope.go:117] "RemoveContainer" containerID="fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.587059 4902 scope.go:117] "RemoveContainer" containerID="eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.605230 4902 scope.go:117] "RemoveContainer" containerID="756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.625287 4902 scope.go:117] "RemoveContainer" containerID="c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.643297 4902 scope.go:117] "RemoveContainer" containerID="df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" Jan 21 14:57:43 crc 
kubenswrapper[4902]: I0121 14:57:43.659724 4902 scope.go:117] "RemoveContainer" containerID="b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.679380 4902 scope.go:117] "RemoveContainer" containerID="723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.695008 4902 scope.go:117] "RemoveContainer" containerID="ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.710287 4902 scope.go:117] "RemoveContainer" containerID="69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.731445 4902 scope.go:117] "RemoveContainer" containerID="71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.731972 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc\": container with ID starting with 71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc not found: ID does not exist" containerID="71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.732016 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc"} err="failed to get container status \"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc\": rpc error: code = NotFound desc = could not find container \"71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc\": container with ID starting with 71c0d832813defb527aa1814f77c62c31a9b901d6374122ce45b4dae5af8b2fc not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.732070 4902 scope.go:117] "RemoveContainer" containerID="a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.732465 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f\": container with ID starting with a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f not found: ID does not exist" containerID="a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.732494 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f"} err="failed to get container status \"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f\": rpc error: code = NotFound desc = could not find container \"a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f\": container with ID starting with a30d2bcf70ef847e08dbc9d9224aa7503e20b62010662ae727cb980a6ab4c74f not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.732613 4902 scope.go:117] "RemoveContainer" containerID="589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.733032 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e\": container with ID starting with 589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e not found: ID does not exist" containerID="589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.733175 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e"} err="failed to get container status \"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e\": rpc error: code = NotFound desc = could not find container \"589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e\": container with ID starting with 589c8e987378518c50a4239aab22f37669b8fce024b489fd25d649386ced3d1e not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.733290 4902 scope.go:117] "RemoveContainer" containerID="6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.733672 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179\": container with ID starting with 6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179 not found: ID does not exist" containerID="6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.733694 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179"} err="failed to get container status \"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179\": rpc error: code = NotFound desc = could not find container \"6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179\": container with ID starting with 6320021372f52db2a539bf2f519f63977f7b01d5d4c96c9c7ae2dce0f186a179 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.733708 4902 scope.go:117] "RemoveContainer" containerID="0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.734186 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a\": container with ID starting with 0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a not found: ID does not exist" containerID="0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.734218 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a"} err="failed to get container status \"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a\": rpc error: code = NotFound desc = could not find container \"0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a\": container with ID starting with 0d9b2680755b51525f4eef751aa1e463e06f9df7d76bed29e873a6352135fd2a not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.734259 4902 scope.go:117] "RemoveContainer" containerID="a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" Jan 21 14:57:43 crc 
kubenswrapper[4902]: E0121 14:57:43.734543 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9\": container with ID starting with a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9 not found: ID does not exist" containerID="a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.734568 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9"} err="failed to get container status \"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9\": rpc error: code = NotFound desc = could not find container \"a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9\": container with ID starting with a309b5d736fadd584b07d61743bc1087795277967d3048633d130a54e7a0a2f9 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.734583 4902 scope.go:117] "RemoveContainer" containerID="fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.735125 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f\": container with ID starting with fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f not found: ID does not exist" containerID="fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.735144 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f"} err="failed to get container status \"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f\": rpc error: code = NotFound desc = could not find container \"fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f\": container with ID starting with fa808074dff822c166bf8fbcd6c7f007cf2c658fab6e0ab1a35ba153e92dde3f not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.735159 4902 scope.go:117] "RemoveContainer" containerID="eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.735690 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e\": container with ID starting with eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e not found: ID does not exist" containerID="eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.735710 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e"} err="failed to get container status \"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e\": rpc error: code = NotFound desc = could not find container \"eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e\": container with ID starting with eda379b9ee52d7ac2e7ec591e2dc3b95be7ee3d18055fddfcf3542e57ca1c98e not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: 
I0121 14:57:43.735723 4902 scope.go:117] "RemoveContainer" containerID="756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.736405 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606\": container with ID starting with 756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606 not found: ID does not exist" containerID="756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.736424 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606"} err="failed to get container status \"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606\": rpc error: code = NotFound desc = could not find container \"756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606\": container with ID starting with 756ce3a3f953d2fd237335e016ee529b00d408161cf888a9b4f665730ccb1606 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.736439 4902 scope.go:117] "RemoveContainer" containerID="c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.736925 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a\": container with ID starting with c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a not found: ID does not exist" containerID="c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.736970 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a"} err="failed to get container status \"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a\": rpc error: code = NotFound desc = could not find container \"c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a\": container with ID starting with c8efbdf8e83a0ee4a69196ad4fbc44c92f3de02c3f0e80339fbe5fa1e52e264a not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.736989 4902 scope.go:117] "RemoveContainer" containerID="df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.737460 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157\": container with ID starting with df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157 not found: ID does not exist" containerID="df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.737482 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157"} err="failed to get container status \"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157\": rpc error: code = NotFound desc = could not find container \"df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157\": container 
with ID starting with df9a4478488edfff2376fad946a9f0ad42be5e91f76c876975aeed8f8b54f157 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.737590 4902 scope.go:117] "RemoveContainer" containerID="b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.737929 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad\": container with ID starting with b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad not found: ID does not exist" containerID="b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.737976 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad"} err="failed to get container status \"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad\": rpc error: code = NotFound desc = could not find container \"b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad\": container with ID starting with b2819cc10b0afdb7b82c8bf672e2c597bd8c51d4aca3985269338de5040ceaad not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.737993 4902 scope.go:117] "RemoveContainer" containerID="723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.738457 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135\": container with ID starting with 723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135 not found: ID does not exist" containerID="723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.738483 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135"} err="failed to get container status \"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135\": rpc error: code = NotFound desc = could not find container \"723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135\": container with ID starting with 723bd124384ff73a91f7462d52a69b0fca836ee5b022ed9c57d08f05cda53135 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.738500 4902 scope.go:117] "RemoveContainer" containerID="ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.738824 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5\": container with ID starting with ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5 not found: ID does not exist" containerID="ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.738870 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5"} err="failed to get container status 
\"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5\": rpc error: code = NotFound desc = could not find container \"ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5\": container with ID starting with ca026a64e5ed0440bb4053384aebafe4bc62e459d3afe1a25e2aea263dbc89a5 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.738887 4902 scope.go:117] "RemoveContainer" containerID="69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.739248 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b\": container with ID starting with 69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b not found: ID does not exist" containerID="69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.739270 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b"} err="failed to get container status \"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b\": rpc error: code = NotFound desc = could not find container \"69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b\": container with ID starting with 69d6b1f3d4fc49552bd33a29a238e43f674dc6454a4547cada3ad48ce517de8b not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.739286 4902 scope.go:117] "RemoveContainer" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.760895 4902 scope.go:117] "RemoveContainer" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.777680 4902 scope.go:117] "RemoveContainer" containerID="e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.797508 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.807465 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-4sm9h"] Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.815022 4902 scope.go:117] "RemoveContainer" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.815655 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1\": container with ID starting with 0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1 not found: ID does not exist" containerID="0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.815716 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1"} err="failed to get container status \"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1\": rpc error: code = NotFound desc = could not find container \"0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1\": container with ID 
starting with 0d6ffec0976a2a2fc4f2e8e6c7dd5c7cd95a1c099341bbbb14ea4a9cbcb3ccc1 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.815748 4902 scope.go:117] "RemoveContainer" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.816168 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8\": container with ID starting with df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 not found: ID does not exist" containerID="df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.816247 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8"} err="failed to get container status \"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8\": rpc error: code = NotFound desc = could not find container \"df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8\": container with ID starting with df1a43a608ca801d6e100403bd0976b471facbbd2adaf9df07d59aa0c6e69af8 not found: ID does not exist" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.816355 4902 scope.go:117] "RemoveContainer" containerID="e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb" Jan 21 14:57:43 crc kubenswrapper[4902]: E0121 14:57:43.816663 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb\": container with ID starting with e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb not found: ID does not exist" containerID="e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb" Jan 21 14:57:43 crc kubenswrapper[4902]: I0121 14:57:43.816725 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb"} err="failed to get container status \"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb\": rpc error: code = NotFound desc = could not find container \"e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb\": container with ID starting with e34b32829dca6d57eb471e674b23b27f3959ee5f87b800836461d13aa9a37bcb not found: ID does not exist" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.306897 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" path="/var/lib/kubelet/pods/bfa512c9-b91a-4a30-8a23-548ef53b094e/volumes" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.308490 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" path="/var/lib/kubelet/pods/ee214fec-083a-4abd-b65e-003bccee24fa/volumes" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.377878 4902 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod5ef26f87-2d73-4847-abfb-a3bbda8c01c6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod5ef26f87-2d73-4847-abfb-a3bbda8c01c6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5ef26f87_2d73_4847_abfb_a3bbda8c01c6.slice" Jan 21 14:57:44 crc 
kubenswrapper[4902]: E0121 14:57:44.378183 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod5ef26f87-2d73-4847-abfb-a3bbda8c01c6] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod5ef26f87-2d73-4847-abfb-a3bbda8c01c6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5ef26f87_2d73_4847_abfb_a3bbda8c01c6.slice" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.397917 4902 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8891f80f-6cb0-4dc6-9f92-836d465e1c84"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8891f80f-6cb0-4dc6-9f92-836d465e1c84] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8891f80f_6cb0_4dc6_9f92_836d465e1c84.slice" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.419176 4902 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode8135258-f03d-4c9a-be6f-7dd1dd099188"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode8135258-f03d-4c9a-be6f-7dd1dd099188] : Timed out while waiting for systemd to remove kubepods-besteffort-pode8135258_f03d_4c9a_be6f_7dd1dd099188.slice" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.482878 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-gzrwg" Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.539214 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:57:44 crc kubenswrapper[4902]: I0121 14:57:44.546536 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-gzrwg"] Jan 21 14:57:46 crc kubenswrapper[4902]: I0121 14:57:46.312253 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef26f87-2d73-4847-abfb-a3bbda8c01c6" path="/var/lib/kubelet/pods/5ef26f87-2d73-4847-abfb-a3bbda8c01c6/volumes" Jan 21 14:57:47 crc kubenswrapper[4902]: I0121 14:57:47.770095 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:57:47 crc kubenswrapper[4902]: I0121 14:57:47.770164 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:58:17 crc kubenswrapper[4902]: I0121 14:58:17.770016 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:58:17 crc kubenswrapper[4902]: I0121 14:58:17.770566 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.026768 4902 scope.go:117] "RemoveContainer" containerID="4de70b4a162bef7d46289abd4a1b9363b5ded88ef279f8bfde6f5eb04e8068c8" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.064821 4902 scope.go:117] "RemoveContainer" containerID="1e365c417d7c9fc9f0e3c50b8df2956ab629924185f3c066a501456bc7f2f244" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.088316 4902 scope.go:117] "RemoveContainer" containerID="04e51686a115d7efa7ccafee00c3c35f348877ed4159bb02ef8fdec725c74808" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.115120 4902 scope.go:117] "RemoveContainer" containerID="1bfce2ecde4206400633bc9ed5a03f89132046bc198571a9ea9d8cdbe7e9aafa" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.150854 4902 scope.go:117] "RemoveContainer" containerID="fa5cddac767f0cfa37e86e0452a0e4172f930485b3055e92e46247cd7dffa247" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.168173 4902 scope.go:117] "RemoveContainer" containerID="f6b39c880fbd40f2782ed02884cfa856d1ecf3dfd90d97c9787d318a34cf7495" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.196951 4902 scope.go:117] "RemoveContainer" containerID="fabbe3c5e36565bf6c2514be460d8e197d15c7ef2a2eaad51eaaf9fc51cd6931" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.219475 4902 scope.go:117] "RemoveContainer" containerID="7d4422a73cd9c69151e982d6a24415a420632cf5387be9a9908b89fae4b7d136" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.243333 4902 scope.go:117] "RemoveContainer" containerID="2e960884dfc54470df60f875a779cf61caa394a9b9eb4b58037a649720bdac73" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.274227 4902 scope.go:117] "RemoveContainer" containerID="f8e614c23f60db2d2289c45f03de6ca360a2d28723c52bf7d5442f33e4ef3cb9" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.297791 4902 scope.go:117] "RemoveContainer" containerID="0db12f9364007deb6067c2c445b04573d37703a8a3c7073268d343c3233327a1" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.323814 4902 scope.go:117] "RemoveContainer" containerID="29527624e52b61188971d77dcdc19feadc4e519866ced3ad0c73f26335294506" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.348653 4902 scope.go:117] "RemoveContainer" containerID="9f4ace11ba250ec7523d0e7ac4b0965a74da40d284c328763aa454874b0606f8" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.369423 4902 scope.go:117] "RemoveContainer" containerID="6843f7fdaa415e7e2f0347cd97fdaa8f7eaf2a1c6b75202daa5f85889752389a" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.390376 4902 scope.go:117] "RemoveContainer" containerID="9c7eb232194bf5acf0b72c5e4e2b10f32410c50f4767d8979981cf5af8e7ed7d" Jan 21 14:58:19 crc kubenswrapper[4902]: I0121 14:58:19.414817 4902 scope.go:117] "RemoveContainer" containerID="183c9aacc3759e23732dbe091d0a8125502d61ad06cbf81f3beb450ef89e7614" Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.769878 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.770507 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.770558 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.771390 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:58:47 crc kubenswrapper[4902]: I0121 14:58:47.771478 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61" gracePeriod=600 Jan 21 14:58:48 crc kubenswrapper[4902]: I0121 14:58:48.014992 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61" exitCode=0 Jan 21 14:58:48 crc kubenswrapper[4902]: I0121 14:58:48.015073 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61"} Jan 21 14:58:48 crc kubenswrapper[4902]: I0121 14:58:48.015354 4902 scope.go:117] "RemoveContainer" containerID="0203ec0a15ee1aa92f4eb3d8e44c0e52d1043afb244cf40caae4761f1f1ee369" Jan 21 14:58:49 crc kubenswrapper[4902]: I0121 14:58:49.026420 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"} Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.492442 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493787 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.493820 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493871 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-reaper" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.493888 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-reaper" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493919 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.493935 4902 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493950 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="sg-core" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.493964 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="sg-core" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.493989 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494005 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494032 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494087 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494109 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494127 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494142 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494156 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494178 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494196 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494217 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerName="nova-scheduler-scheduler" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494231 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerName="nova-scheduler-scheduler" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494257 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="openstack-network-exporter" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494273 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="openstack-network-exporter" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494299 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor" Jan 21 14:59:19 crc 
kubenswrapper[4902]: I0121 14:59:19.494315 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494338 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494356 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494405 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494423 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494450 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494466 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494490 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494505 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494536 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494553 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494573 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="setup-container" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494589 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="setup-container" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494616 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494631 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494650 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494666 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-server" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494684 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494700 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494721 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494736 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-server" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494757 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494773 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494800 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server-init" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494815 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server-init" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494836 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.494853 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.494884 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502680 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.502784 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502804 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.502827 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-notification-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502847 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-notification-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.502883 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="mysql-bootstrap" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502902 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="mysql-bootstrap" Jan 21 14:59:19 
crc kubenswrapper[4902]: E0121 14:59:19.502931 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="proxy-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502946 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="proxy-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.502974 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-central-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.502989 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-central-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503019 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" containerName="memcached" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503033 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" containerName="memcached" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503082 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503097 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503117 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503136 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-server" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503162 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="setup-container" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503179 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="setup-container" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503209 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503224 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503262 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" containerName="mariadb-account-create-update" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503284 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" containerName="mariadb-account-create-update" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503313 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503329 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" 
containerName="object-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503353 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerName="keystone-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503369 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerName="keystone-api" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503395 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503412 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503477 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503496 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503523 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503542 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503576 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503594 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503614 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503631 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503649 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503665 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503692 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerName="kube-state-metrics" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503709 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerName="kube-state-metrics" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503732 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503769 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" 
containerName="ovn-northd" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503791 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-expirer" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503809 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-expirer" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503832 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503848 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-api" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503868 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="swift-recon-cron" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503884 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="swift-recon-cron" Jan 21 14:59:19 crc kubenswrapper[4902]: E0121 14:59:19.503912 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="rsync" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.503932 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="rsync" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504392 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504423 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="swift-recon-cron" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504457 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c366100e-d2a0-4be9-965f-ef7b7ad39f78" containerName="nova-scheduler-scheduler" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504489 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504519 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504574 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovsdb-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504596 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504615 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e00c7d5-7199-4602-9d3b-5af4f14124bc" containerName="keystone-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504646 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7103bd-b24b-4a0c-b68a-17373307f1aa" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504669 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504695 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc235c8-beef-433d-b663-e1d09b6a9b65" containerName="nova-cell1-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504718 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504740 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504766 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-server" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504784 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="ovn-northd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504810 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504829 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52494a8-ff56-449e-a274-b37eb4bad43d" containerName="kube-state-metrics" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504847 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504874 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="sg-core" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504901 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504931 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-central-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504953 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="proxy-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504972 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.504990 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-updater" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505063 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa6f350-dd82-4d59-ac24-5460acc2a8a6" containerName="nova-metadata-metadata" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505091 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-reaper" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505111 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" 
containerName="object-expirer" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505134 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a933f8-5063-4cd1-8d3d-420e82d4e1fd" containerName="galera" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505156 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff2c3d8-2d68-4255-a175-21f0df1b9276" containerName="openstack-network-exporter" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505185 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="rsync" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505204 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-replicator" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505223 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c653ffa0-195e-4eda-8c25-cfcff2715bdf" containerName="barbican-worker" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505248 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="object-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505266 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505288 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="874c6c46-dedc-4ec9-8ee5-c45ef9cddb53" containerName="ceilometer-notification-agent" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505317 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea9ca5b-2e24-41de-8a99-a882ec11c222" containerName="nova-api-api" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505335 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2acfa57e-c4e9-4809-b5cb-109f1bbb64f2" containerName="mariadb-account-create-update" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505356 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c70bcdb-316e-4246-b333-ddaf6438c6ee" containerName="memcached" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505375 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4168bc0-26cf-4786-9e28-95647462c372" containerName="glance-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505397 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="365d6c18-395e-4a62-939d-a04927ffa8aa" containerName="barbican-keystone-listener" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505415 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="359a818e-1c34-4dfd-bb59-0e72280a85a0" containerName="nova-cell0-conductor-conductor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505433 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc3ac42-826c-4f25-a3f7-d1ab2eb8cbf5" containerName="neutron-httpd" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505451 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="container-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505469 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa512c9-b91a-4a30-8a23-548ef53b094e" containerName="ovs-vswitchd" Jan 21 14:59:19 crc kubenswrapper[4902]: 
I0121 14:59:19.505489 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="561efc1e-a930-440f-83b1-a75217a11f32" containerName="barbican-api-log" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505507 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f50f65-9151-4444-9680-f86e0f256069" containerName="rabbitmq" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.505526 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee214fec-083a-4abd-b65e-003bccee24fa" containerName="account-auditor" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.507814 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.510818 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.635378 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.635449 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.635467 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.736768 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.736911 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.736940 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.737596 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.737992 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.755111 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") pod \"redhat-marketplace-tw2g7\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.838259 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.886187 4902 scope.go:117] "RemoveContainer" containerID="7692fd62f5f8d970ca1dd253fc5c7512cbe9da4bdb84caf7d56a5669f3d8f303" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.922219 4902 scope.go:117] "RemoveContainer" containerID="276b271b02ab000b334b001c5253fa10542fc6c000e67438f4ac84d47645e83c" Jan 21 14:59:19 crc kubenswrapper[4902]: I0121 14:59:19.976914 4902 scope.go:117] "RemoveContainer" containerID="8d87c9d3ad1eb4e5b4f658d4e0a489d56cdaee9fc570202fc41afc9916f4ea6a" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.017637 4902 scope.go:117] "RemoveContainer" containerID="c05aa038a30ca68cb9b9875b1713755a7a748b30cae2fd412e457a921170733c" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.085293 4902 scope.go:117] "RemoveContainer" containerID="2afbcb861df82627e26ab173626f1c8e32c7418b9f0cebb9c30b8e8a773fee20" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.110508 4902 scope.go:117] "RemoveContainer" containerID="885834645f14556231a3e7a784298540883e5e957ef165eb89b1d865e26a97ac" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.138465 4902 scope.go:117] "RemoveContainer" containerID="b979d6e79dba97b3f526cfab4506aea68e0143adfc4356de611547f4493bec9f" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.156147 4902 scope.go:117] "RemoveContainer" containerID="4cc1203e814fc62d40f33869f21d58884c995a732338ba7c6403b666fa8b712d" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.172016 4902 scope.go:117] "RemoveContainer" containerID="e09162a3ec37680929590914b38193023c428285227f1464b2740e369fca6b12" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.187063 4902 scope.go:117] "RemoveContainer" containerID="689584950e8fe70d3a520e19880e648a9cfc4e1dba5d9cf1c7c92f94555adda3" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.204277 4902 scope.go:117] "RemoveContainer" containerID="cff876825001ee2c7fa7f8bdbe379da8527d1a33b467f10b305adc0a8747aa98" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.228172 4902 scope.go:117] "RemoveContainer" containerID="3d6ff1e2aa4c6d25b1afbc1c6226ab9dd8acd5472135388cee9dad4beae1dc39" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.254416 4902 scope.go:117] "RemoveContainer" containerID="493b2b2d4384ed074008865724712fb9ff226fa56a68d6f6b8711c1447a2d13b" Jan 21 14:59:20 crc 
kubenswrapper[4902]: I0121 14:59:20.291396 4902 scope.go:117] "RemoveContainer" containerID="d68914c4c8e15dba0295d1fd9bb40d5fc60aa1162bc79ce24523d135a247b33e" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.320182 4902 scope.go:117] "RemoveContainer" containerID="277691b4cd995bb05532afffdba1de6a3149dc7dc1e0f0e9ce9ba32058b05cf6" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.336187 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.349325 4902 scope.go:117] "RemoveContainer" containerID="af472f5b3bb9010ffaa61382ab0352d28b368f2e713ea44d92c653fb5e095055" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.379180 4902 scope.go:117] "RemoveContainer" containerID="dff0b2c9f0b06182d720253d8f2ef15a7b10dcf34cc35665586623d88b252d47" Jan 21 14:59:20 crc kubenswrapper[4902]: I0121 14:59:20.414270 4902 scope.go:117] "RemoveContainer" containerID="b1d80a37b9ccbfaf4f2535ef16320e6b2227313b028f8ea36eaf1aa897c3fa62" Jan 21 14:59:21 crc kubenswrapper[4902]: I0121 14:59:21.356092 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerID="2cde6e75f7222b067d6b79b31a7ebe5313dd71bd0e2b68973e655c5cf6f0a600" exitCode=0 Jan 21 14:59:21 crc kubenswrapper[4902]: I0121 14:59:21.356157 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerDied","Data":"2cde6e75f7222b067d6b79b31a7ebe5313dd71bd0e2b68973e655c5cf6f0a600"} Jan 21 14:59:21 crc kubenswrapper[4902]: I0121 14:59:21.357975 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerStarted","Data":"7aedcda0876399885a06ed84fcbe0f07fa12b56336f8d179ea7b38ba4421ef37"} Jan 21 14:59:22 crc kubenswrapper[4902]: I0121 14:59:22.370455 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerID="af29f10ace20181a510bff8176fb59c730f1f35a22ee04c32fe59d5d86239e27" exitCode=0 Jan 21 14:59:22 crc kubenswrapper[4902]: I0121 14:59:22.370530 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerDied","Data":"af29f10ace20181a510bff8176fb59c730f1f35a22ee04c32fe59d5d86239e27"} Jan 21 14:59:23 crc kubenswrapper[4902]: I0121 14:59:23.380201 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerStarted","Data":"3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef"} Jan 21 14:59:23 crc kubenswrapper[4902]: I0121 14:59:23.404804 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tw2g7" podStartSLOduration=2.992041671 podStartE2EDuration="4.404780313s" podCreationTimestamp="2026-01-21 14:59:19 +0000 UTC" firstStartedPulling="2026-01-21 14:59:21.357701026 +0000 UTC m=+1523.434534055" lastFinishedPulling="2026-01-21 14:59:22.770439628 +0000 UTC m=+1524.847272697" observedRunningTime="2026-01-21 14:59:23.398075164 +0000 UTC m=+1525.474908203" watchObservedRunningTime="2026-01-21 14:59:23.404780313 +0000 UTC m=+1525.481613362" Jan 21 14:59:29 crc kubenswrapper[4902]: I0121 14:59:29.839188 4902 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:29 crc kubenswrapper[4902]: I0121 14:59:29.841096 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:29 crc kubenswrapper[4902]: I0121 14:59:29.885019 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:30 crc kubenswrapper[4902]: I0121 14:59:30.478720 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:30 crc kubenswrapper[4902]: I0121 14:59:30.532349 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:32 crc kubenswrapper[4902]: I0121 14:59:32.451552 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tw2g7" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="registry-server" containerID="cri-o://3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef" gracePeriod=2 Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.461437 4902 generic.go:334] "Generic (PLEG): container finished" podID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerID="3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef" exitCode=0 Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.461637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerDied","Data":"3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef"} Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.461775 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw2g7" event={"ID":"bfc1e018-89a5-4a48-8dc9-6230711c4c49","Type":"ContainerDied","Data":"7aedcda0876399885a06ed84fcbe0f07fa12b56336f8d179ea7b38ba4421ef37"} Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.461787 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aedcda0876399885a06ed84fcbe0f07fa12b56336f8d179ea7b38ba4421ef37" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.502687 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.656355 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") pod \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.656424 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") pod \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.656493 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") pod \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\" (UID: \"bfc1e018-89a5-4a48-8dc9-6230711c4c49\") " Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.658332 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities" (OuterVolumeSpecName: "utilities") pod "bfc1e018-89a5-4a48-8dc9-6230711c4c49" (UID: "bfc1e018-89a5-4a48-8dc9-6230711c4c49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.671280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl" (OuterVolumeSpecName: "kube-api-access-qrvjl") pod "bfc1e018-89a5-4a48-8dc9-6230711c4c49" (UID: "bfc1e018-89a5-4a48-8dc9-6230711c4c49"). InnerVolumeSpecName "kube-api-access-qrvjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.703673 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfc1e018-89a5-4a48-8dc9-6230711c4c49" (UID: "bfc1e018-89a5-4a48-8dc9-6230711c4c49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.759240 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrvjl\" (UniqueName: \"kubernetes.io/projected/bfc1e018-89a5-4a48-8dc9-6230711c4c49-kube-api-access-qrvjl\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.759301 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:33 crc kubenswrapper[4902]: I0121 14:59:33.759319 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc1e018-89a5-4a48-8dc9-6230711c4c49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:34 crc kubenswrapper[4902]: I0121 14:59:34.471240 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw2g7" Jan 21 14:59:34 crc kubenswrapper[4902]: I0121 14:59:34.497717 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:34 crc kubenswrapper[4902]: I0121 14:59:34.503829 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw2g7"] Jan 21 14:59:36 crc kubenswrapper[4902]: I0121 14:59:36.304155 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" path="/var/lib/kubelet/pods/bfc1e018-89a5-4a48-8dc9-6230711c4c49/volumes" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.127093 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:45 crc kubenswrapper[4902]: E0121 14:59:45.127869 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="extract-content" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.127885 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="extract-content" Jan 21 14:59:45 crc kubenswrapper[4902]: E0121 14:59:45.127898 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="registry-server" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.127906 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="registry-server" Jan 21 14:59:45 crc kubenswrapper[4902]: E0121 14:59:45.127920 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="extract-utilities" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.127930 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="extract-utilities" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.128180 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc1e018-89a5-4a48-8dc9-6230711c4c49" containerName="registry-server" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.129436 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.146367 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.217696 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.217820 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.217846 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.319621 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.319670 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.319761 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.320318 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.320429 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.341478 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") pod \"community-operators-6mpxw\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.459112 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:45 crc kubenswrapper[4902]: I0121 14:59:45.949179 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:46 crc kubenswrapper[4902]: I0121 14:59:46.560761 4902 generic.go:334] "Generic (PLEG): container finished" podID="cae8d234-1e79-4509-be2f-286368c7e394" containerID="784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f" exitCode=0 Jan 21 14:59:46 crc kubenswrapper[4902]: I0121 14:59:46.560882 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerDied","Data":"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f"} Jan 21 14:59:46 crc kubenswrapper[4902]: I0121 14:59:46.561054 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerStarted","Data":"1b25494bfab6f21f8caa559335efe4ed7881aad4a905f3a2da79ccb3ba3a2b88"} Jan 21 14:59:47 crc kubenswrapper[4902]: I0121 14:59:47.569942 4902 generic.go:334] "Generic (PLEG): container finished" podID="cae8d234-1e79-4509-be2f-286368c7e394" containerID="8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961" exitCode=0 Jan 21 14:59:47 crc kubenswrapper[4902]: I0121 14:59:47.570087 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerDied","Data":"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961"} Jan 21 14:59:48 crc kubenswrapper[4902]: I0121 14:59:48.579753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerStarted","Data":"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171"} Jan 21 14:59:48 crc kubenswrapper[4902]: I0121 14:59:48.597845 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6mpxw" podStartSLOduration=1.8876132380000001 podStartE2EDuration="3.597830579s" podCreationTimestamp="2026-01-21 14:59:45 +0000 UTC" firstStartedPulling="2026-01-21 14:59:46.563876461 +0000 UTC m=+1548.640709490" lastFinishedPulling="2026-01-21 14:59:48.274093802 +0000 UTC m=+1550.350926831" observedRunningTime="2026-01-21 14:59:48.595415201 +0000 UTC m=+1550.672248240" watchObservedRunningTime="2026-01-21 14:59:48.597830579 +0000 UTC m=+1550.674663608" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 14:59:55.459219 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 14:59:55.459729 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 
14:59:55.497892 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 14:59:55.695705 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:55 crc kubenswrapper[4902]: I0121 14:59:55.740482 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:57 crc kubenswrapper[4902]: I0121 14:59:57.658261 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6mpxw" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="registry-server" containerID="cri-o://fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" gracePeriod=2 Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.584626 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667656 4902 generic.go:334] "Generic (PLEG): container finished" podID="cae8d234-1e79-4509-be2f-286368c7e394" containerID="fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" exitCode=0 Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerDied","Data":"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171"} Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667728 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mpxw" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667748 4902 scope.go:117] "RemoveContainer" containerID="fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.667735 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mpxw" event={"ID":"cae8d234-1e79-4509-be2f-286368c7e394","Type":"ContainerDied","Data":"1b25494bfab6f21f8caa559335efe4ed7881aad4a905f3a2da79ccb3ba3a2b88"} Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.687025 4902 scope.go:117] "RemoveContainer" containerID="8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.704110 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") pod \"cae8d234-1e79-4509-be2f-286368c7e394\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.704195 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") pod \"cae8d234-1e79-4509-be2f-286368c7e394\" (UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.704317 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") pod \"cae8d234-1e79-4509-be2f-286368c7e394\" 
(UID: \"cae8d234-1e79-4509-be2f-286368c7e394\") " Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.705002 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities" (OuterVolumeSpecName: "utilities") pod "cae8d234-1e79-4509-be2f-286368c7e394" (UID: "cae8d234-1e79-4509-be2f-286368c7e394"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.707527 4902 scope.go:117] "RemoveContainer" containerID="784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.712299 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m" (OuterVolumeSpecName: "kube-api-access-rvx7m") pod "cae8d234-1e79-4509-be2f-286368c7e394" (UID: "cae8d234-1e79-4509-be2f-286368c7e394"). InnerVolumeSpecName "kube-api-access-rvx7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.755097 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cae8d234-1e79-4509-be2f-286368c7e394" (UID: "cae8d234-1e79-4509-be2f-286368c7e394"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.755603 4902 scope.go:117] "RemoveContainer" containerID="fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" Jan 21 14:59:58 crc kubenswrapper[4902]: E0121 14:59:58.756235 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171\": container with ID starting with fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171 not found: ID does not exist" containerID="fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.756263 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171"} err="failed to get container status \"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171\": rpc error: code = NotFound desc = could not find container \"fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171\": container with ID starting with fc31fbb8289e14d66ad68cc8bcf460165b2958123eedf2803e0b0bd0b9821171 not found: ID does not exist" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.756285 4902 scope.go:117] "RemoveContainer" containerID="8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961" Jan 21 14:59:58 crc kubenswrapper[4902]: E0121 14:59:58.756696 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961\": container with ID starting with 8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961 not found: ID does not exist" containerID="8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.756741 4902 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961"} err="failed to get container status \"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961\": rpc error: code = NotFound desc = could not find container \"8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961\": container with ID starting with 8c3b49ac9b6344761005c36f2d673b0b9ee35d2d9496bfc6071082d0f99ae961 not found: ID does not exist" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.756796 4902 scope.go:117] "RemoveContainer" containerID="784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f" Jan 21 14:59:58 crc kubenswrapper[4902]: E0121 14:59:58.757165 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f\": container with ID starting with 784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f not found: ID does not exist" containerID="784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.757239 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f"} err="failed to get container status \"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f\": rpc error: code = NotFound desc = could not find container \"784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f\": container with ID starting with 784ba278c492e2b79e7c4f0a249e38eb1d464723708a8acd99a97cecaa12256f not found: ID does not exist" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.806232 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvx7m\" (UniqueName: \"kubernetes.io/projected/cae8d234-1e79-4509-be2f-286368c7e394-kube-api-access-rvx7m\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.806267 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.806277 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae8d234-1e79-4509-be2f-286368c7e394-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.993986 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 14:59:58 crc kubenswrapper[4902]: I0121 14:59:58.999307 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6mpxw"] Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161061 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:00:00 crc kubenswrapper[4902]: E0121 15:00:00.161719 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="extract-utilities" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161737 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="extract-utilities" Jan 21 15:00:00 crc kubenswrapper[4902]: E0121 15:00:00.161757 4902 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="registry-server" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161764 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="registry-server" Jan 21 15:00:00 crc kubenswrapper[4902]: E0121 15:00:00.161779 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="extract-content" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161787 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="extract-content" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.161934 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae8d234-1e79-4509-be2f-286368c7e394" containerName="registry-server" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.162518 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.165059 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.165240 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.180147 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.231518 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.231584 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.231626 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.303569 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae8d234-1e79-4509-be2f-286368c7e394" path="/var/lib/kubelet/pods/cae8d234-1e79-4509-be2f-286368c7e394/volumes" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.333013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.333144 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.333190 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.334393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.346952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.349153 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") pod \"collect-profiles-29483460-qn2th\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:00 crc kubenswrapper[4902]: I0121 15:00:00.482231 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:01 crc kubenswrapper[4902]: I0121 15:00:01.119793 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:00:01 crc kubenswrapper[4902]: I0121 15:00:01.692699 4902 generic.go:334] "Generic (PLEG): container finished" podID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" containerID="7ee1e059c9213e4cad45fc2396c6626d215288fb3b3b38f6079f8306a505e407" exitCode=0 Jan 21 15:00:01 crc kubenswrapper[4902]: I0121 15:00:01.692892 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" event={"ID":"0ada0d02-9902-4746-b1ad-42b3f9e711a7","Type":"ContainerDied","Data":"7ee1e059c9213e4cad45fc2396c6626d215288fb3b3b38f6079f8306a505e407"} Jan 21 15:00:01 crc kubenswrapper[4902]: I0121 15:00:01.693008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" event={"ID":"0ada0d02-9902-4746-b1ad-42b3f9e711a7","Type":"ContainerStarted","Data":"5510a8d9d406797dbc38fff442dbe2988144de0e86c45b04a74234038024e718"} Jan 21 15:00:02 crc kubenswrapper[4902]: I0121 15:00:02.984686 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.095749 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") pod \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.095800 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") pod \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.095854 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") pod \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\" (UID: \"0ada0d02-9902-4746-b1ad-42b3f9e711a7\") " Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.097031 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume" (OuterVolumeSpecName: "config-volume") pod "0ada0d02-9902-4746-b1ad-42b3f9e711a7" (UID: "0ada0d02-9902-4746-b1ad-42b3f9e711a7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.101383 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0ada0d02-9902-4746-b1ad-42b3f9e711a7" (UID: "0ada0d02-9902-4746-b1ad-42b3f9e711a7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.102645 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw" (OuterVolumeSpecName: "kube-api-access-wcjsw") pod "0ada0d02-9902-4746-b1ad-42b3f9e711a7" (UID: "0ada0d02-9902-4746-b1ad-42b3f9e711a7"). InnerVolumeSpecName "kube-api-access-wcjsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.197505 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ada0d02-9902-4746-b1ad-42b3f9e711a7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.197537 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcjsw\" (UniqueName: \"kubernetes.io/projected/0ada0d02-9902-4746-b1ad-42b3f9e711a7-kube-api-access-wcjsw\") on node \"crc\" DevicePath \"\"" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.197549 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ada0d02-9902-4746-b1ad-42b3f9e711a7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.709023 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" event={"ID":"0ada0d02-9902-4746-b1ad-42b3f9e711a7","Type":"ContainerDied","Data":"5510a8d9d406797dbc38fff442dbe2988144de0e86c45b04a74234038024e718"} Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.709096 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5510a8d9d406797dbc38fff442dbe2988144de0e86c45b04a74234038024e718" Jan 21 15:00:03 crc kubenswrapper[4902]: I0121 15:00:03.709093 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th" Jan 21 15:00:20 crc kubenswrapper[4902]: I0121 15:00:20.657159 4902 scope.go:117] "RemoveContainer" containerID="2c30f8fcf44519868021b999009e6e0a364f65ba9bb5e12d8b816868d45e7ed6" Jan 21 15:00:20 crc kubenswrapper[4902]: I0121 15:00:20.684227 4902 scope.go:117] "RemoveContainer" containerID="b544fa374d13ef6e784a8d5d16f0cdb36de690b191b7cd286db841a786a83df0" Jan 21 15:00:20 crc kubenswrapper[4902]: I0121 15:00:20.736295 4902 scope.go:117] "RemoveContainer" containerID="51583e6b97e071d7cf96bdf513ff863344bb3712ef59fd993cdce4376b16aa3c" Jan 21 15:00:20 crc kubenswrapper[4902]: I0121 15:00:20.764737 4902 scope.go:117] "RemoveContainer" containerID="ae254c62b0513ec3d622f49b853707c7b475818d264ba6a9ceb8efcfd14f5993" Jan 21 15:01:17 crc kubenswrapper[4902]: I0121 15:01:17.769578 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:01:17 crc kubenswrapper[4902]: I0121 15:01:17.770214 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.844703 4902 scope.go:117] "RemoveContainer" containerID="184ed0c03e177484d5129302f45e661a1a2c46bd5bca5080444db5e2821f6ed4" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.877237 4902 scope.go:117] "RemoveContainer" containerID="40c9945717c6eed6957b84780ec6e3c2301b7187e2ec047124eab88f68c26607" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.896590 4902 scope.go:117] "RemoveContainer" containerID="670dee5a8d2ff2f59f49370b068ca6bd9c9b2aa28c545aa7b4fee5f803108537" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.929406 4902 scope.go:117] "RemoveContainer" containerID="616e23f05d0b14c7f93dad0c321acc148cd9b2f70ea9019e00391345fff5c7ec" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.977875 4902 scope.go:117] "RemoveContainer" containerID="a133e1c90783c83c710a2eea26c02cb7d28a759bac5a441a7e04e3644c54f5fe" Jan 21 15:01:20 crc kubenswrapper[4902]: I0121 15:01:20.993861 4902 scope.go:117] "RemoveContainer" containerID="e91c9182d83789cb593143e414372ebd78fcb513ff497dbf59abde2ed01e0281" Jan 21 15:01:21 crc kubenswrapper[4902]: I0121 15:01:21.031836 4902 scope.go:117] "RemoveContainer" containerID="7ba71046f87bc5f37e174f3f4e4802a75f487d3b7ef216e3060c7e05c5b07755" Jan 21 15:01:21 crc kubenswrapper[4902]: I0121 15:01:21.049736 4902 scope.go:117] "RemoveContainer" containerID="70aa2cf0840fc5f93cbebf841da43d8a387c82a6f9fae61768e764946c976710" Jan 21 15:01:47 crc kubenswrapper[4902]: I0121 15:01:47.769935 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:01:47 crc kubenswrapper[4902]: I0121 15:01:47.770830 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.769365 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.770061 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.770142 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.770950 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:02:17 crc kubenswrapper[4902]: I0121 15:02:17.771033 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" gracePeriod=600 Jan 21 15:02:17 crc kubenswrapper[4902]: E0121 15:02:17.878443 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6c85cc7_ee09_4640_ab22_ce79d086ad7a.slice/crio-conmon-b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6c85cc7_ee09_4640_ab22_ce79d086ad7a.slice/crio-b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:02:17 crc kubenswrapper[4902]: E0121 15:02:17.896932 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:02:18 crc kubenswrapper[4902]: I0121 15:02:18.818204 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" exitCode=0 Jan 21 15:02:18 crc kubenswrapper[4902]: I0121 15:02:18.818244 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110"} Jan 21 15:02:18 crc kubenswrapper[4902]: I0121 15:02:18.818277 4902 scope.go:117] "RemoveContainer" containerID="faf0ff0caeac282dde2bef565f9dbd539a4c5633dd4c8ba54b6bd0e6704b0a61" Jan 21 15:02:18 crc kubenswrapper[4902]: I0121 15:02:18.818837 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:02:18 crc kubenswrapper[4902]: E0121 15:02:18.819154 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:02:21 crc kubenswrapper[4902]: I0121 15:02:21.153372 4902 scope.go:117] "RemoveContainer" containerID="878874319de7ae0b30076fed21352753826b954ce4e5342f533a40aa94a4f9e8" Jan 21 15:02:21 crc kubenswrapper[4902]: I0121 15:02:21.213489 4902 scope.go:117] "RemoveContainer" containerID="80f1113ebae178430104e31cb438bfd4b8237fd75e17bfe92c4d153d21a7d7b4" Jan 21 15:02:21 crc kubenswrapper[4902]: I0121 15:02:21.252129 4902 scope.go:117] "RemoveContainer" containerID="f49c0a85c7357d87bfb57238893040a47cac5bc0bd2e46a347d2884a529aa300" Jan 21 15:02:34 crc kubenswrapper[4902]: I0121 15:02:34.295294 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:02:34 crc kubenswrapper[4902]: E0121 15:02:34.296192 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:02:45 crc kubenswrapper[4902]: I0121 15:02:45.295022 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:02:45 crc kubenswrapper[4902]: E0121 15:02:45.296145 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:02:58 crc kubenswrapper[4902]: I0121 15:02:58.298710 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:02:58 crc kubenswrapper[4902]: E0121 15:02:58.299446 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:11 crc kubenswrapper[4902]: I0121 15:03:11.295179 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:03:11 crc kubenswrapper[4902]: E0121 15:03:11.295810 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:26 crc kubenswrapper[4902]: I0121 15:03:26.295300 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:03:26 crc kubenswrapper[4902]: E0121 15:03:26.296033 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:40 crc kubenswrapper[4902]: I0121 15:03:40.294581 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:03:40 crc kubenswrapper[4902]: E0121 15:03:40.295546 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.318449 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] Jan 21 15:03:44 crc kubenswrapper[4902]: E0121 15:03:44.319414 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" containerName="collect-profiles" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.319427 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" containerName="collect-profiles" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.319572 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" containerName="collect-profiles" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.321181 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.338746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.338897 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.339027 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.372828 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.439696 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.439824 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.439890 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.440629 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.440738 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.471167 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") pod \"redhat-operators-7qls4\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:44 crc kubenswrapper[4902]: I0121 15:03:44.651577 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.159947 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.925389 4902 generic.go:334] "Generic (PLEG): container finished" podID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerID="5fedad8f9fadbd6fbeb2cc7d7a45ba88db6a66a98f3529b9df3eea4163559678" exitCode=0 Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.925452 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerDied","Data":"5fedad8f9fadbd6fbeb2cc7d7a45ba88db6a66a98f3529b9df3eea4163559678"} Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.925486 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerStarted","Data":"706edc1d7426e3cd62c0ef63834715c5d190790b0c9569569ab948a674b91051"} Jan 21 15:03:45 crc kubenswrapper[4902]: I0121 15:03:45.927548 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.717366 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.719695 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.753695 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.873323 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.873422 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.873517 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975189 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975306 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975354 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975830 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:46 crc kubenswrapper[4902]: I0121 15:03:46.975924 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.003839 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") pod \"certified-operators-7vpk9\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.057592 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.348791 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:03:47 crc kubenswrapper[4902]: W0121 15:03:47.361175 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d2ff121_c8ec_43d3_b97d_e2f164b9f847.slice/crio-e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc WatchSource:0}: Error finding container e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc: Status 404 returned error can't find the container with id e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.942407 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerID="13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e" exitCode=0 Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.942463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerDied","Data":"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e"} Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.942487 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerStarted","Data":"e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc"} Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.945489 4902 generic.go:334] "Generic (PLEG): container finished" podID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerID="76f48702894400b1b02016cf71f8f4d7d8d3fc5d7d9bf5ccc45da8d5b224203b" exitCode=0 Jan 21 15:03:47 crc kubenswrapper[4902]: I0121 15:03:47.945525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerDied","Data":"76f48702894400b1b02016cf71f8f4d7d8d3fc5d7d9bf5ccc45da8d5b224203b"} Jan 21 15:03:48 crc kubenswrapper[4902]: I0121 15:03:48.954005 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerStarted","Data":"fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549"} Jan 21 15:03:48 crc kubenswrapper[4902]: I0121 15:03:48.973622 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7qls4" podStartSLOduration=2.5471077429999998 podStartE2EDuration="4.97360404s" podCreationTimestamp="2026-01-21 15:03:44 +0000 UTC" firstStartedPulling="2026-01-21 15:03:45.927257311 +0000 UTC m=+1788.004090350" lastFinishedPulling="2026-01-21 15:03:48.353753618 +0000 UTC m=+1790.430586647" observedRunningTime="2026-01-21 15:03:48.968088093 +0000 UTC 
m=+1791.044921142" watchObservedRunningTime="2026-01-21 15:03:48.97360404 +0000 UTC m=+1791.050437069" Jan 21 15:03:52 crc kubenswrapper[4902]: I0121 15:03:52.985889 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerStarted","Data":"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0"} Jan 21 15:03:53 crc kubenswrapper[4902]: I0121 15:03:53.294154 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:03:53 crc kubenswrapper[4902]: E0121 15:03:53.294719 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:03:53 crc kubenswrapper[4902]: I0121 15:03:53.997897 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerID="e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0" exitCode=0 Jan 21 15:03:53 crc kubenswrapper[4902]: I0121 15:03:53.997966 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerDied","Data":"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0"} Jan 21 15:03:54 crc kubenswrapper[4902]: I0121 15:03:54.652709 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:54 crc kubenswrapper[4902]: I0121 15:03:54.652961 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:54 crc kubenswrapper[4902]: I0121 15:03:54.692318 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:55 crc kubenswrapper[4902]: I0121 15:03:55.008012 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerStarted","Data":"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec"} Jan 21 15:03:55 crc kubenswrapper[4902]: I0121 15:03:55.032055 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7vpk9" podStartSLOduration=2.576999078 podStartE2EDuration="9.03202514s" podCreationTimestamp="2026-01-21 15:03:46 +0000 UTC" firstStartedPulling="2026-01-21 15:03:47.945783292 +0000 UTC m=+1790.022616321" lastFinishedPulling="2026-01-21 15:03:54.400809354 +0000 UTC m=+1796.477642383" observedRunningTime="2026-01-21 15:03:55.029270757 +0000 UTC m=+1797.106103796" watchObservedRunningTime="2026-01-21 15:03:55.03202514 +0000 UTC m=+1797.108858169" Jan 21 15:03:55 crc kubenswrapper[4902]: I0121 15:03:55.058122 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:56 crc kubenswrapper[4902]: I0121 15:03:56.720136 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] 
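
Note on the two "Observed pod startup duration" entries above: the figures they report are consistent with a simple relationship, namely podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp, and podStartSLOduration = that same span minus the image-pull window (lastFinishedPulling - firstStartedPulling). That is an inference from the numbers, not a quote of kubelet source; the sketch below is not kubelet code, it just redoes the arithmetic in Go for the certified-operators-7vpk9 entry under that assumption.

    package main

    import (
        "fmt"
        "time"
    )

    // Timestamp layout used by the startup-duration entries above
    // (Go's default time.Time string form).
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Values copied from the certified-operators-7vpk9 entry logged at 15:03:55.
        created := mustParse("2026-01-21 15:03:46 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2026-01-21 15:03:47.945783292 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2026-01-21 15:03:54.400809354 +0000 UTC")  // lastFinishedPulling
        running := mustParse("2026-01-21 15:03:55.03202514 +0000 UTC")    // watchObservedRunningTime

        e2e := running.Sub(created)          // 9.03202514s, matches podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 2.576999078s, matches podStartSLOduration
        fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
    }

The same arithmetic applied to the redhat-operators-7qls4 entry (created 15:03:44, pulls 15:03:45.927 to 15:03:48.354, running observed at 15:03:48.974) gives 4.97360404s and roughly 2.5471s, matching the figures logged there.
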
Jan 21 15:03:57 crc kubenswrapper[4902]: I0121 15:03:57.020801 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7qls4" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="registry-server" containerID="cri-o://fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549" gracePeriod=2 Jan 21 15:03:57 crc kubenswrapper[4902]: I0121 15:03:57.058181 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:57 crc kubenswrapper[4902]: I0121 15:03:57.058250 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:57 crc kubenswrapper[4902]: I0121 15:03:57.110133 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.034783 4902 generic.go:334] "Generic (PLEG): container finished" podID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerID="fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549" exitCode=0 Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.035747 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerDied","Data":"fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549"} Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.112840 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.248178 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") pod \"42b124a0-69eb-423b-9303-c39fc8881a4d\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.248284 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") pod \"42b124a0-69eb-423b-9303-c39fc8881a4d\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.248362 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") pod \"42b124a0-69eb-423b-9303-c39fc8881a4d\" (UID: \"42b124a0-69eb-423b-9303-c39fc8881a4d\") " Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.249264 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities" (OuterVolumeSpecName: "utilities") pod "42b124a0-69eb-423b-9303-c39fc8881a4d" (UID: "42b124a0-69eb-423b-9303-c39fc8881a4d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.255277 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9" (OuterVolumeSpecName: "kube-api-access-k2hb9") pod "42b124a0-69eb-423b-9303-c39fc8881a4d" (UID: "42b124a0-69eb-423b-9303-c39fc8881a4d"). InnerVolumeSpecName "kube-api-access-k2hb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.350101 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:03:58 crc kubenswrapper[4902]: I0121 15:03:58.350141 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2hb9\" (UniqueName: \"kubernetes.io/projected/42b124a0-69eb-423b-9303-c39fc8881a4d-kube-api-access-k2hb9\") on node \"crc\" DevicePath \"\"" Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.043326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7qls4" event={"ID":"42b124a0-69eb-423b-9303-c39fc8881a4d","Type":"ContainerDied","Data":"706edc1d7426e3cd62c0ef63834715c5d190790b0c9569569ab948a674b91051"} Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.043395 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7qls4" Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.043393 4902 scope.go:117] "RemoveContainer" containerID="fe03e2c63c8b6e000dbdbfd4e692fc0ca2d4978df9701c9e736e8a878c1f5549" Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.062531 4902 scope.go:117] "RemoveContainer" containerID="76f48702894400b1b02016cf71f8f4d7d8d3fc5d7d9bf5ccc45da8d5b224203b" Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.095383 4902 scope.go:117] "RemoveContainer" containerID="5fedad8f9fadbd6fbeb2cc7d7a45ba88db6a66a98f3529b9df3eea4163559678" Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.282442 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42b124a0-69eb-423b-9303-c39fc8881a4d" (UID: "42b124a0-69eb-423b-9303-c39fc8881a4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.368771 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b124a0-69eb-423b-9303-c39fc8881a4d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.380967 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] Jan 21 15:03:59 crc kubenswrapper[4902]: I0121 15:03:59.389417 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7qls4"] Jan 21 15:04:00 crc kubenswrapper[4902]: I0121 15:04:00.303720 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" path="/var/lib/kubelet/pods/42b124a0-69eb-423b-9303-c39fc8881a4d/volumes" Jan 21 15:04:07 crc kubenswrapper[4902]: I0121 15:04:07.119218 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.299898 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:04:08 crc kubenswrapper[4902]: E0121 15:04:08.300264 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.304454 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.372523 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"] Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.372787 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-26g5j" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="registry-server" containerID="cri-o://bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6" gracePeriod=2 Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.753242 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.812357 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") pod \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.812454 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") pod \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.812482 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") pod \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\" (UID: \"9904001f-3d1f-494d-bfb6-5baa56f45c7b\") " Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.813970 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities" (OuterVolumeSpecName: "utilities") pod "9904001f-3d1f-494d-bfb6-5baa56f45c7b" (UID: "9904001f-3d1f-494d-bfb6-5baa56f45c7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.817706 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97" (OuterVolumeSpecName: "kube-api-access-vmh97") pod "9904001f-3d1f-494d-bfb6-5baa56f45c7b" (UID: "9904001f-3d1f-494d-bfb6-5baa56f45c7b"). InnerVolumeSpecName "kube-api-access-vmh97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.882194 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9904001f-3d1f-494d-bfb6-5baa56f45c7b" (UID: "9904001f-3d1f-494d-bfb6-5baa56f45c7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.913898 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.913935 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9904001f-3d1f-494d-bfb6-5baa56f45c7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:04:08 crc kubenswrapper[4902]: I0121 15:04:08.913949 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmh97\" (UniqueName: \"kubernetes.io/projected/9904001f-3d1f-494d-bfb6-5baa56f45c7b-kube-api-access-vmh97\") on node \"crc\" DevicePath \"\"" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124347 4902 generic.go:334] "Generic (PLEG): container finished" podID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerID="bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6" exitCode=0 Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124397 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerDied","Data":"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6"} Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124411 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-26g5j" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124429 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-26g5j" event={"ID":"9904001f-3d1f-494d-bfb6-5baa56f45c7b","Type":"ContainerDied","Data":"739b3544e777bebaead10779acdf44cab51721b0171dbd10be4cd7129f38efe6"} Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.124450 4902 scope.go:117] "RemoveContainer" containerID="bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.146747 4902 scope.go:117] "RemoveContainer" containerID="324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.189633 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"] Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.194830 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-26g5j"] Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.203558 4902 scope.go:117] "RemoveContainer" containerID="de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232236 4902 scope.go:117] "RemoveContainer" containerID="bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6" Jan 21 15:04:09 crc kubenswrapper[4902]: E0121 15:04:09.232497 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6\": container with ID starting with bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6 not found: ID does not exist" containerID="bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232524 
4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6"} err="failed to get container status \"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6\": rpc error: code = NotFound desc = could not find container \"bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6\": container with ID starting with bdda8110ef80e2457707012571953999e6aaf0500c5346effbc07053df7ad7a6 not found: ID does not exist" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232545 4902 scope.go:117] "RemoveContainer" containerID="324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7" Jan 21 15:04:09 crc kubenswrapper[4902]: E0121 15:04:09.232718 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7\": container with ID starting with 324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7 not found: ID does not exist" containerID="324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232742 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7"} err="failed to get container status \"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7\": rpc error: code = NotFound desc = could not find container \"324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7\": container with ID starting with 324a5a076adc14dfa1ea7fccdb6783b2662ed72b0f0e9eef50d853de6bd34ce7 not found: ID does not exist" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.232757 4902 scope.go:117] "RemoveContainer" containerID="de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5" Jan 21 15:04:09 crc kubenswrapper[4902]: E0121 15:04:09.233110 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5\": container with ID starting with de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5 not found: ID does not exist" containerID="de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5" Jan 21 15:04:09 crc kubenswrapper[4902]: I0121 15:04:09.233131 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5"} err="failed to get container status \"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5\": rpc error: code = NotFound desc = could not find container \"de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5\": container with ID starting with de60c37f5710208f72f9f0715ff13efe88f49c259037aabe1c6d0e05acd832c5 not found: ID does not exist" Jan 21 15:04:10 crc kubenswrapper[4902]: I0121 15:04:10.303655 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" path="/var/lib/kubelet/pods/9904001f-3d1f-494d-bfb6-5baa56f45c7b/volumes" Jan 21 15:04:23 crc kubenswrapper[4902]: I0121 15:04:23.294643 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:04:23 crc kubenswrapper[4902]: E0121 15:04:23.295367 4902 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:04:34 crc kubenswrapper[4902]: I0121 15:04:34.296304 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:04:34 crc kubenswrapper[4902]: E0121 15:04:34.297566 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:04:48 crc kubenswrapper[4902]: I0121 15:04:48.300318 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:04:48 crc kubenswrapper[4902]: E0121 15:04:48.301059 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:05:01 crc kubenswrapper[4902]: I0121 15:05:01.295276 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:05:01 crc kubenswrapper[4902]: E0121 15:05:01.297499 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:05:12 crc kubenswrapper[4902]: I0121 15:05:12.295687 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:05:12 crc kubenswrapper[4902]: E0121 15:05:12.297655 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:05:21 crc kubenswrapper[4902]: I0121 15:05:21.403851 4902 scope.go:117] "RemoveContainer" containerID="2cde6e75f7222b067d6b79b31a7ebe5313dd71bd0e2b68973e655c5cf6f0a600" Jan 21 15:05:24 crc kubenswrapper[4902]: I0121 15:05:24.295259 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:05:24 crc kubenswrapper[4902]: E0121 15:05:24.296132 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:05:39 crc kubenswrapper[4902]: I0121 15:05:39.294791 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:05:39 crc kubenswrapper[4902]: E0121 15:05:39.296509 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:05:52 crc kubenswrapper[4902]: I0121 15:05:52.295719 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:05:52 crc kubenswrapper[4902]: E0121 15:05:52.296621 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:06:04 crc kubenswrapper[4902]: I0121 15:06:04.295107 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:06:04 crc kubenswrapper[4902]: E0121 15:06:04.295526 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:06:18 crc kubenswrapper[4902]: I0121 15:06:18.304665 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:06:18 crc kubenswrapper[4902]: E0121 15:06:18.305513 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:06:21 crc kubenswrapper[4902]: I0121 15:06:21.439709 4902 scope.go:117] "RemoveContainer" containerID="3422abd8be70782bbc83b0962ea81e9a80ba6bb8f97ab843aad91552b7ac69ef" Jan 21 15:06:21 crc kubenswrapper[4902]: I0121 15:06:21.461439 4902 scope.go:117] "RemoveContainer" containerID="af29f10ace20181a510bff8176fb59c730f1f35a22ee04c32fe59d5d86239e27" Jan 21 15:06:33 crc kubenswrapper[4902]: I0121 15:06:33.294428 4902 scope.go:117] "RemoveContainer" 
containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:06:33 crc kubenswrapper[4902]: E0121 15:06:33.296127 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:06:47 crc kubenswrapper[4902]: I0121 15:06:47.295083 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:06:47 crc kubenswrapper[4902]: E0121 15:06:47.296323 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:07:01 crc kubenswrapper[4902]: I0121 15:07:01.294840 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:07:01 crc kubenswrapper[4902]: E0121 15:07:01.295570 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:07:16 crc kubenswrapper[4902]: I0121 15:07:16.294971 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:07:16 crc kubenswrapper[4902]: E0121 15:07:16.295584 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:07:29 crc kubenswrapper[4902]: I0121 15:07:29.295801 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:07:29 crc kubenswrapper[4902]: I0121 15:07:29.746859 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa"} Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.596216 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"] Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597126 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="extract-content" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597141 4902 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="extract-content" Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597162 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="extract-utilities" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597170 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="extract-utilities" Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597183 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="extract-content" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597193 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="extract-content" Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597213 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="registry-server" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597221 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="registry-server" Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597232 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="registry-server" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597239 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="registry-server" Jan 21 15:09:22 crc kubenswrapper[4902]: E0121 15:09:22.597251 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="extract-utilities" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597258 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="extract-utilities" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597420 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9904001f-3d1f-494d-bfb6-5baa56f45c7b" containerName="registry-server" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.597436 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b124a0-69eb-423b-9303-c39fc8881a4d" containerName="registry-server" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.598620 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.609714 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"] Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.705997 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.706075 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.706158 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.807253 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.807320 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.807396 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.807854 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.808127 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.833883 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") pod \"redhat-marketplace-8p4nv\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:22 crc kubenswrapper[4902]: I0121 15:09:22.924410 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.383417 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"] Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.691740 4902 generic.go:334] "Generic (PLEG): container finished" podID="23ad955a-b6d3-482a-808b-710ec9253c20" containerID="7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab" exitCode=0 Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.691814 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerDied","Data":"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab"} Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.692360 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerStarted","Data":"e7625bdb50ae89961afbe71fc32892a8dc04a83d1cc81623c6be51fd71d594af"} Jan 21 15:09:23 crc kubenswrapper[4902]: I0121 15:09:23.694082 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:09:24 crc kubenswrapper[4902]: I0121 15:09:24.700793 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerStarted","Data":"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994"} Jan 21 15:09:25 crc kubenswrapper[4902]: I0121 15:09:25.708179 4902 generic.go:334] "Generic (PLEG): container finished" podID="23ad955a-b6d3-482a-808b-710ec9253c20" containerID="bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994" exitCode=0 Jan 21 15:09:25 crc kubenswrapper[4902]: I0121 15:09:25.708231 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerDied","Data":"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994"} Jan 21 15:09:26 crc kubenswrapper[4902]: I0121 15:09:26.718179 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerStarted","Data":"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5"} Jan 21 15:09:26 crc kubenswrapper[4902]: I0121 15:09:26.754121 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8p4nv" podStartSLOduration=2.379950037 podStartE2EDuration="4.754091555s" podCreationTimestamp="2026-01-21 15:09:22 +0000 UTC" firstStartedPulling="2026-01-21 15:09:23.693782023 +0000 UTC m=+2125.770615062" lastFinishedPulling="2026-01-21 15:09:26.067923551 +0000 UTC m=+2128.144756580" observedRunningTime="2026-01-21 15:09:26.745885509 +0000 UTC m=+2128.822718538" watchObservedRunningTime="2026-01-21 15:09:26.754091555 +0000 UTC 
m=+2128.830924624" Jan 21 15:09:32 crc kubenswrapper[4902]: I0121 15:09:32.925307 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:32 crc kubenswrapper[4902]: I0121 15:09:32.925900 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:32 crc kubenswrapper[4902]: I0121 15:09:32.975061 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:33 crc kubenswrapper[4902]: I0121 15:09:33.818905 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:33 crc kubenswrapper[4902]: I0121 15:09:33.859459 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"] Jan 21 15:09:35 crc kubenswrapper[4902]: I0121 15:09:35.784210 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8p4nv" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="registry-server" containerID="cri-o://555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5" gracePeriod=2 Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.758688 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792252 4902 generic.go:334] "Generic (PLEG): container finished" podID="23ad955a-b6d3-482a-808b-710ec9253c20" containerID="555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5" exitCode=0 Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792301 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerDied","Data":"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5"} Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8p4nv" event={"ID":"23ad955a-b6d3-482a-808b-710ec9253c20","Type":"ContainerDied","Data":"e7625bdb50ae89961afbe71fc32892a8dc04a83d1cc81623c6be51fd71d594af"} Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792335 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8p4nv" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.792342 4902 scope.go:117] "RemoveContainer" containerID="555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.820273 4902 scope.go:117] "RemoveContainer" containerID="bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.835680 4902 scope.go:117] "RemoveContainer" containerID="7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.836081 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") pod \"23ad955a-b6d3-482a-808b-710ec9253c20\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.836202 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") pod \"23ad955a-b6d3-482a-808b-710ec9253c20\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.836278 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") pod \"23ad955a-b6d3-482a-808b-710ec9253c20\" (UID: \"23ad955a-b6d3-482a-808b-710ec9253c20\") " Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.837403 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities" (OuterVolumeSpecName: "utilities") pod "23ad955a-b6d3-482a-808b-710ec9253c20" (UID: "23ad955a-b6d3-482a-808b-710ec9253c20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.837810 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.842564 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5" (OuterVolumeSpecName: "kube-api-access-nv8w5") pod "23ad955a-b6d3-482a-808b-710ec9253c20" (UID: "23ad955a-b6d3-482a-808b-710ec9253c20"). InnerVolumeSpecName "kube-api-access-nv8w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.858780 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23ad955a-b6d3-482a-808b-710ec9253c20" (UID: "23ad955a-b6d3-482a-808b-710ec9253c20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.886142 4902 scope.go:117] "RemoveContainer" containerID="555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5" Jan 21 15:09:36 crc kubenswrapper[4902]: E0121 15:09:36.886712 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5\": container with ID starting with 555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5 not found: ID does not exist" containerID="555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.886752 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5"} err="failed to get container status \"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5\": rpc error: code = NotFound desc = could not find container \"555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5\": container with ID starting with 555b4e4062931f55f4772aabc96192c86d589b60fb1eeacc805adfd58d950cb5 not found: ID does not exist" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.886778 4902 scope.go:117] "RemoveContainer" containerID="bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994" Jan 21 15:09:36 crc kubenswrapper[4902]: E0121 15:09:36.887229 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994\": container with ID starting with bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994 not found: ID does not exist" containerID="bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.887265 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994"} err="failed to get container status \"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994\": rpc error: code = NotFound desc = could not find container \"bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994\": container with ID starting with bea5e0188992165e7e204738ff2257e4521d3d8edd7c83e51c56e93e5ce59994 not found: ID does not exist" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.887288 4902 scope.go:117] "RemoveContainer" containerID="7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab" Jan 21 15:09:36 crc kubenswrapper[4902]: E0121 15:09:36.887577 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab\": container with ID starting with 7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab not found: ID does not exist" containerID="7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.887665 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab"} err="failed to get container status \"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab\": rpc error: code = NotFound desc = could not 
find container \"7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab\": container with ID starting with 7f1775d7f1828462230ff3259b2721101d050c5f65b04c287360bd34060d38ab not found: ID does not exist" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.939321 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23ad955a-b6d3-482a-808b-710ec9253c20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:09:36 crc kubenswrapper[4902]: I0121 15:09:36.939582 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv8w5\" (UniqueName: \"kubernetes.io/projected/23ad955a-b6d3-482a-808b-710ec9253c20-kube-api-access-nv8w5\") on node \"crc\" DevicePath \"\"" Jan 21 15:09:37 crc kubenswrapper[4902]: I0121 15:09:37.122090 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"] Jan 21 15:09:37 crc kubenswrapper[4902]: I0121 15:09:37.129648 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8p4nv"] Jan 21 15:09:38 crc kubenswrapper[4902]: I0121 15:09:38.309553 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" path="/var/lib/kubelet/pods/23ad955a-b6d3-482a-808b-710ec9253c20/volumes" Jan 21 15:09:47 crc kubenswrapper[4902]: I0121 15:09:47.770109 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:09:47 crc kubenswrapper[4902]: I0121 15:09:47.770723 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:10:17 crc kubenswrapper[4902]: I0121 15:10:17.769883 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:10:17 crc kubenswrapper[4902]: I0121 15:10:17.770934 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.770561 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.771230 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.771290 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.772117 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:10:47 crc kubenswrapper[4902]: I0121 15:10:47.772198 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa" gracePeriod=600 Jan 21 15:10:48 crc kubenswrapper[4902]: I0121 15:10:48.300652 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa" exitCode=0 Jan 21 15:10:48 crc kubenswrapper[4902]: I0121 15:10:48.307698 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa"} Jan 21 15:10:48 crc kubenswrapper[4902]: I0121 15:10:48.307807 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27"} Jan 21 15:10:48 crc kubenswrapper[4902]: I0121 15:10:48.307827 4902 scope.go:117] "RemoveContainer" containerID="b7032011676a7523bf39bcf0225a4402b986cdac5a946f1672715e52582e5110" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.211663 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wjsh4"] Jan 21 15:11:09 crc kubenswrapper[4902]: E0121 15:11:09.213814 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="extract-utilities" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.213961 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="extract-utilities" Jan 21 15:11:09 crc kubenswrapper[4902]: E0121 15:11:09.214093 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="extract-content" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.214191 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="extract-content" Jan 21 15:11:09 crc kubenswrapper[4902]: E0121 15:11:09.214313 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="registry-server" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.214394 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="registry-server" 
Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.214659 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ad955a-b6d3-482a-808b-710ec9253c20" containerName="registry-server" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.216002 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.222596 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjsh4"] Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.321060 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-catalog-content\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.321113 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-utilities\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.321189 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2qw\" (UniqueName: \"kubernetes.io/projected/e5fe57c1-6b56-4abe-8067-dd74165e5937-kube-api-access-8z2qw\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.422249 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-catalog-content\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.422536 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-utilities\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.422636 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z2qw\" (UniqueName: \"kubernetes.io/projected/e5fe57c1-6b56-4abe-8067-dd74165e5937-kube-api-access-8z2qw\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.423312 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-utilities\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.423551 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e5fe57c1-6b56-4abe-8067-dd74165e5937-catalog-content\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.454750 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2qw\" (UniqueName: \"kubernetes.io/projected/e5fe57c1-6b56-4abe-8067-dd74165e5937-kube-api-access-8z2qw\") pod \"community-operators-wjsh4\" (UID: \"e5fe57c1-6b56-4abe-8067-dd74165e5937\") " pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:09 crc kubenswrapper[4902]: I0121 15:11:09.532815 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:10 crc kubenswrapper[4902]: I0121 15:11:10.042532 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjsh4"] Jan 21 15:11:10 crc kubenswrapper[4902]: W0121 15:11:10.049216 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5fe57c1_6b56_4abe_8067_dd74165e5937.slice/crio-89b7465d8dc818cfaa2e1689a3da79184e64fc5acdd505be8d6d0a84f726f3c2 WatchSource:0}: Error finding container 89b7465d8dc818cfaa2e1689a3da79184e64fc5acdd505be8d6d0a84f726f3c2: Status 404 returned error can't find the container with id 89b7465d8dc818cfaa2e1689a3da79184e64fc5acdd505be8d6d0a84f726f3c2 Jan 21 15:11:10 crc kubenswrapper[4902]: I0121 15:11:10.469251 4902 generic.go:334] "Generic (PLEG): container finished" podID="e5fe57c1-6b56-4abe-8067-dd74165e5937" containerID="344d19118c86f03fbf2f993a19127bdedf8b15658b6934e157f5be424839cc2b" exitCode=0 Jan 21 15:11:10 crc kubenswrapper[4902]: I0121 15:11:10.469301 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerDied","Data":"344d19118c86f03fbf2f993a19127bdedf8b15658b6934e157f5be424839cc2b"} Jan 21 15:11:10 crc kubenswrapper[4902]: I0121 15:11:10.469327 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerStarted","Data":"89b7465d8dc818cfaa2e1689a3da79184e64fc5acdd505be8d6d0a84f726f3c2"} Jan 21 15:11:14 crc kubenswrapper[4902]: I0121 15:11:14.497031 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerStarted","Data":"34374407baf34d68a7781ace1d2fb9bc3bdb37a4c1cf7aa74f8ac2a1a35e8926"} Jan 21 15:11:15 crc kubenswrapper[4902]: I0121 15:11:15.509363 4902 generic.go:334] "Generic (PLEG): container finished" podID="e5fe57c1-6b56-4abe-8067-dd74165e5937" containerID="34374407baf34d68a7781ace1d2fb9bc3bdb37a4c1cf7aa74f8ac2a1a35e8926" exitCode=0 Jan 21 15:11:15 crc kubenswrapper[4902]: I0121 15:11:15.509423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerDied","Data":"34374407baf34d68a7781ace1d2fb9bc3bdb37a4c1cf7aa74f8ac2a1a35e8926"} Jan 21 15:11:16 crc kubenswrapper[4902]: I0121 15:11:16.517456 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjsh4" 
event={"ID":"e5fe57c1-6b56-4abe-8067-dd74165e5937","Type":"ContainerStarted","Data":"19a8d1133a2e6685cc9bc7e7f47735379996d6da281f38f489fde9576b3c3a8b"} Jan 21 15:11:16 crc kubenswrapper[4902]: I0121 15:11:16.537291 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wjsh4" podStartSLOduration=2.091553252 podStartE2EDuration="7.537272809s" podCreationTimestamp="2026-01-21 15:11:09 +0000 UTC" firstStartedPulling="2026-01-21 15:11:10.471309725 +0000 UTC m=+2232.548142754" lastFinishedPulling="2026-01-21 15:11:15.917029272 +0000 UTC m=+2237.993862311" observedRunningTime="2026-01-21 15:11:16.53356349 +0000 UTC m=+2238.610396519" watchObservedRunningTime="2026-01-21 15:11:16.537272809 +0000 UTC m=+2238.614105838" Jan 21 15:11:19 crc kubenswrapper[4902]: I0121 15:11:19.534530 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:19 crc kubenswrapper[4902]: I0121 15:11:19.534825 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:19 crc kubenswrapper[4902]: I0121 15:11:19.585759 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:29 crc kubenswrapper[4902]: I0121 15:11:29.577750 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wjsh4" Jan 21 15:11:29 crc kubenswrapper[4902]: I0121 15:11:29.646086 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjsh4"] Jan 21 15:11:29 crc kubenswrapper[4902]: I0121 15:11:29.683424 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 15:11:29 crc kubenswrapper[4902]: I0121 15:11:29.684206 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wx2t6" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" containerID="cri-o://f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" gracePeriod=2 Jan 21 15:11:29 crc kubenswrapper[4902]: E0121 15:11:29.769299 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b is running failed: container process not found" containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:11:29 crc kubenswrapper[4902]: E0121 15:11:29.770021 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b is running failed: container process not found" containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:11:29 crc kubenswrapper[4902]: E0121 15:11:29.770828 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b is running failed: container process not found" 
containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:11:29 crc kubenswrapper[4902]: E0121 15:11:29.771104 4902 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-wx2t6" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" Jan 21 15:11:30 crc kubenswrapper[4902]: I0121 15:11:30.612932 4902 generic.go:334] "Generic (PLEG): container finished" podID="a1458bec-2134-4eb6-8510-ece2a6568215" containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" exitCode=0 Jan 21 15:11:30 crc kubenswrapper[4902]: I0121 15:11:30.613012 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerDied","Data":"f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b"} Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.184935 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.330760 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") pod \"a1458bec-2134-4eb6-8510-ece2a6568215\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.331322 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") pod \"a1458bec-2134-4eb6-8510-ece2a6568215\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.331372 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") pod \"a1458bec-2134-4eb6-8510-ece2a6568215\" (UID: \"a1458bec-2134-4eb6-8510-ece2a6568215\") " Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.334565 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities" (OuterVolumeSpecName: "utilities") pod "a1458bec-2134-4eb6-8510-ece2a6568215" (UID: "a1458bec-2134-4eb6-8510-ece2a6568215"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.346860 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq" (OuterVolumeSpecName: "kube-api-access-bxvzq") pod "a1458bec-2134-4eb6-8510-ece2a6568215" (UID: "a1458bec-2134-4eb6-8510-ece2a6568215"). InnerVolumeSpecName "kube-api-access-bxvzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.385613 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1458bec-2134-4eb6-8510-ece2a6568215" (UID: "a1458bec-2134-4eb6-8510-ece2a6568215"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.432323 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxvzq\" (UniqueName: \"kubernetes.io/projected/a1458bec-2134-4eb6-8510-ece2a6568215-kube-api-access-bxvzq\") on node \"crc\" DevicePath \"\"" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.432365 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.432378 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1458bec-2134-4eb6-8510-ece2a6568215-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.622537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx2t6" event={"ID":"a1458bec-2134-4eb6-8510-ece2a6568215","Type":"ContainerDied","Data":"0c33f9b7fd46d05c8e52b7ed0e8c0e3ee3e633992cb415fa75bef4908ef2fa1f"} Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.623178 4902 scope.go:117] "RemoveContainer" containerID="f55942192334bed78a5ef126b5bef566b21361e0fd55643757cc52ec26452a2b" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.622606 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wx2t6" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.651890 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.655141 4902 scope.go:117] "RemoveContainer" containerID="43adeb973bdbf05aa4340e69a147ab41031881fc3cf5bd920322ca643738ff13" Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.658314 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wx2t6"] Jan 21 15:11:31 crc kubenswrapper[4902]: I0121 15:11:31.684968 4902 scope.go:117] "RemoveContainer" containerID="75dbfffe1a292d59aebf0dda1372b5bf1cb539e9684f4315cb02199044a5774e" Jan 21 15:11:32 crc kubenswrapper[4902]: I0121 15:11:32.308237 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" path="/var/lib/kubelet/pods/a1458bec-2134-4eb6-8510-ece2a6568215/volumes" Jan 21 15:13:17 crc kubenswrapper[4902]: I0121 15:13:17.770027 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:13:17 crc kubenswrapper[4902]: I0121 15:13:17.770826 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.050213 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:13:45 crc kubenswrapper[4902]: E0121 15:13:45.051113 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.051127 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" Jan 21 15:13:45 crc kubenswrapper[4902]: E0121 15:13:45.051146 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="extract-content" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.051153 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="extract-content" Jan 21 15:13:45 crc kubenswrapper[4902]: E0121 15:13:45.051173 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="extract-utilities" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.051180 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="extract-utilities" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.051299 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1458bec-2134-4eb6-8510-ece2a6568215" containerName="registry-server" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.052889 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.089289 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.223421 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.223562 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.223602 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.324751 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.324794 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.325326 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.325443 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.325877 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.352334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") pod \"redhat-operators-wzkws\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.381234 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:45 crc kubenswrapper[4902]: I0121 15:13:45.823278 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:13:46 crc kubenswrapper[4902]: I0121 15:13:46.099500 4902 generic.go:334] "Generic (PLEG): container finished" podID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerID="7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2" exitCode=0 Jan 21 15:13:46 crc kubenswrapper[4902]: I0121 15:13:46.099609 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerDied","Data":"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2"} Jan 21 15:13:46 crc kubenswrapper[4902]: I0121 15:13:46.099837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerStarted","Data":"7379ea3435a22eb51df6ace035a8b6585b355ca8dc72320a5ee6641459c189cc"} Jan 21 15:13:47 crc kubenswrapper[4902]: I0121 15:13:47.107804 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerStarted","Data":"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d"} Jan 21 15:13:47 crc kubenswrapper[4902]: I0121 15:13:47.769630 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:13:47 crc kubenswrapper[4902]: I0121 15:13:47.770340 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:13:48 crc kubenswrapper[4902]: I0121 15:13:48.120226 4902 generic.go:334] "Generic (PLEG): container finished" podID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerID="0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d" exitCode=0 Jan 21 15:13:48 crc kubenswrapper[4902]: I0121 15:13:48.120268 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerDied","Data":"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d"} Jan 21 15:13:49 crc kubenswrapper[4902]: I0121 15:13:49.130584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerStarted","Data":"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c"} Jan 21 15:13:49 crc kubenswrapper[4902]: I0121 15:13:49.157014 4902 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wzkws" podStartSLOduration=1.7332472700000001 podStartE2EDuration="4.157000399s" podCreationTimestamp="2026-01-21 15:13:45 +0000 UTC" firstStartedPulling="2026-01-21 15:13:46.101544857 +0000 UTC m=+2388.178377896" lastFinishedPulling="2026-01-21 15:13:48.525297996 +0000 UTC m=+2390.602131025" observedRunningTime="2026-01-21 15:13:49.154812601 +0000 UTC m=+2391.231645650" watchObservedRunningTime="2026-01-21 15:13:49.157000399 +0000 UTC m=+2391.233833428" Jan 21 15:13:55 crc kubenswrapper[4902]: I0121 15:13:55.381912 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:55 crc kubenswrapper[4902]: I0121 15:13:55.382329 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:55 crc kubenswrapper[4902]: I0121 15:13:55.444409 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:56 crc kubenswrapper[4902]: I0121 15:13:56.223184 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:56 crc kubenswrapper[4902]: I0121 15:13:56.267422 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:13:58 crc kubenswrapper[4902]: I0121 15:13:58.194240 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wzkws" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="registry-server" containerID="cri-o://be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" gracePeriod=2 Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.760020 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.949000 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") pod \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.949384 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") pod \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.949565 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") pod \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\" (UID: \"f66198ce-ce00-4f3e-9c56-a90edc66a3d8\") " Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.950210 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities" (OuterVolumeSpecName: "utilities") pod "f66198ce-ce00-4f3e-9c56-a90edc66a3d8" (UID: "f66198ce-ce00-4f3e-9c56-a90edc66a3d8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:13:59 crc kubenswrapper[4902]: I0121 15:13:59.956862 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7" (OuterVolumeSpecName: "kube-api-access-vzht7") pod "f66198ce-ce00-4f3e-9c56-a90edc66a3d8" (UID: "f66198ce-ce00-4f3e-9c56-a90edc66a3d8"). InnerVolumeSpecName "kube-api-access-vzht7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.051403 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.051716 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzht7\" (UniqueName: \"kubernetes.io/projected/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-kube-api-access-vzht7\") on node \"crc\" DevicePath \"\"" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.082111 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f66198ce-ce00-4f3e-9c56-a90edc66a3d8" (UID: "f66198ce-ce00-4f3e-9c56-a90edc66a3d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.153491 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66198ce-ce00-4f3e-9c56-a90edc66a3d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208473 4902 generic.go:334] "Generic (PLEG): container finished" podID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerID="be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" exitCode=0 Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208515 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerDied","Data":"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c"} Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208540 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wzkws" event={"ID":"f66198ce-ce00-4f3e-9c56-a90edc66a3d8","Type":"ContainerDied","Data":"7379ea3435a22eb51df6ace035a8b6585b355ca8dc72320a5ee6641459c189cc"} Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208557 4902 scope.go:117] "RemoveContainer" containerID="be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.208567 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wzkws" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.226807 4902 scope.go:117] "RemoveContainer" containerID="0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.244602 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.249471 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wzkws"] Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.261352 4902 scope.go:117] "RemoveContainer" containerID="7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.301889 4902 scope.go:117] "RemoveContainer" containerID="be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" Jan 21 15:14:00 crc kubenswrapper[4902]: E0121 15:14:00.302285 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c\": container with ID starting with be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c not found: ID does not exist" containerID="be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.302317 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c"} err="failed to get container status \"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c\": rpc error: code = NotFound desc = could not find container \"be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c\": container with ID starting with be57f4a7e8dd004590c77d4ce4805ad8de54289af190a600289c31256ca9f77c not found: ID does not exist" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.302337 4902 scope.go:117] "RemoveContainer" containerID="0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.303818 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" path="/var/lib/kubelet/pods/f66198ce-ce00-4f3e-9c56-a90edc66a3d8/volumes" Jan 21 15:14:00 crc kubenswrapper[4902]: E0121 15:14:00.304455 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d\": container with ID starting with 0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d not found: ID does not exist" containerID="0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.304505 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d"} err="failed to get container status \"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d\": rpc error: code = NotFound desc = could not find container \"0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d\": container with ID starting with 0b7438f5f642b13894e234d2a4415edb797dc4419116a8e7b05022b8f35c8d2d not found: ID does not exist" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 
15:14:00.304536 4902 scope.go:117] "RemoveContainer" containerID="7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2" Jan 21 15:14:00 crc kubenswrapper[4902]: E0121 15:14:00.304869 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2\": container with ID starting with 7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2 not found: ID does not exist" containerID="7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2" Jan 21 15:14:00 crc kubenswrapper[4902]: I0121 15:14:00.304902 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2"} err="failed to get container status \"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2\": rpc error: code = NotFound desc = could not find container \"7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2\": container with ID starting with 7c5d2ef6cc3e7fd025ea4b5f10272723397e48b2fabc67e6be770d45577ea1c2 not found: ID does not exist" Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.769916 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.770568 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.770619 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.771662 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:14:17 crc kubenswrapper[4902]: I0121 15:14:17.771737 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" gracePeriod=600 Jan 21 15:14:17 crc kubenswrapper[4902]: E0121 15:14:17.907738 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:14:18 crc kubenswrapper[4902]: I0121 15:14:18.361441 4902 generic.go:334] 
"Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" exitCode=0 Jan 21 15:14:18 crc kubenswrapper[4902]: I0121 15:14:18.361520 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27"} Jan 21 15:14:18 crc kubenswrapper[4902]: I0121 15:14:18.361818 4902 scope.go:117] "RemoveContainer" containerID="92fd37d3aa001b2164e48ad0a17e03e78770a1a688c0222739493af9ad719afa" Jan 21 15:14:18 crc kubenswrapper[4902]: I0121 15:14:18.363234 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:14:18 crc kubenswrapper[4902]: E0121 15:14:18.364105 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:14:29 crc kubenswrapper[4902]: I0121 15:14:29.295542 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:14:29 crc kubenswrapper[4902]: E0121 15:14:29.297785 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:14:44 crc kubenswrapper[4902]: I0121 15:14:44.295304 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:14:44 crc kubenswrapper[4902]: E0121 15:14:44.296318 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:14:58 crc kubenswrapper[4902]: I0121 15:14:58.299990 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:14:58 crc kubenswrapper[4902]: E0121 15:14:58.301490 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.144035 4902 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"] Jan 21 15:15:00 crc kubenswrapper[4902]: E0121 15:15:00.144686 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.144701 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4902]: E0121 15:15:00.144726 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="extract-utilities" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.144735 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="extract-utilities" Jan 21 15:15:00 crc kubenswrapper[4902]: E0121 15:15:00.144754 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.144763 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.145370 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66198ce-ce00-4f3e-9c56-a90edc66a3d8" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.145968 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.147815 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.154985 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"] Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.155822 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.165469 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.165567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.165598 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.267132 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.267177 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.267221 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.268273 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.275131 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.288944 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") pod \"collect-profiles-29483475-zk92d\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.467484 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:00 crc kubenswrapper[4902]: I0121 15:15:00.905836 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"] Jan 21 15:15:01 crc kubenswrapper[4902]: I0121 15:15:01.696666 4902 generic.go:334] "Generic (PLEG): container finished" podID="bebd9484-ab72-4bbd-84e7-99f28795ad85" containerID="5f2cc1ae5d9e64887200b316f71af17b596d6725e436d2e46c7acd03a38f0c75" exitCode=0 Jan 21 15:15:01 crc kubenswrapper[4902]: I0121 15:15:01.696730 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" event={"ID":"bebd9484-ab72-4bbd-84e7-99f28795ad85","Type":"ContainerDied","Data":"5f2cc1ae5d9e64887200b316f71af17b596d6725e436d2e46c7acd03a38f0c75"} Jan 21 15:15:01 crc kubenswrapper[4902]: I0121 15:15:01.696963 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" event={"ID":"bebd9484-ab72-4bbd-84e7-99f28795ad85","Type":"ContainerStarted","Data":"18dd01f0dec26129b8ac3b01d72dd971536f6f32d360ef65d3a9bd90a2b6abfc"} Jan 21 15:15:02 crc kubenswrapper[4902]: I0121 15:15:02.974059 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.015932 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") pod \"bebd9484-ab72-4bbd-84e7-99f28795ad85\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.016094 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") pod \"bebd9484-ab72-4bbd-84e7-99f28795ad85\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.016124 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") pod \"bebd9484-ab72-4bbd-84e7-99f28795ad85\" (UID: \"bebd9484-ab72-4bbd-84e7-99f28795ad85\") " Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.017631 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume" (OuterVolumeSpecName: "config-volume") pod "bebd9484-ab72-4bbd-84e7-99f28795ad85" (UID: "bebd9484-ab72-4bbd-84e7-99f28795ad85"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.022290 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bebd9484-ab72-4bbd-84e7-99f28795ad85" (UID: "bebd9484-ab72-4bbd-84e7-99f28795ad85"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.022626 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j" (OuterVolumeSpecName: "kube-api-access-ls67j") pod "bebd9484-ab72-4bbd-84e7-99f28795ad85" (UID: "bebd9484-ab72-4bbd-84e7-99f28795ad85"). InnerVolumeSpecName "kube-api-access-ls67j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.118130 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebd9484-ab72-4bbd-84e7-99f28795ad85-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.118181 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls67j\" (UniqueName: \"kubernetes.io/projected/bebd9484-ab72-4bbd-84e7-99f28795ad85-kube-api-access-ls67j\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.118196 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bebd9484-ab72-4bbd-84e7-99f28795ad85-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.713669 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" event={"ID":"bebd9484-ab72-4bbd-84e7-99f28795ad85","Type":"ContainerDied","Data":"18dd01f0dec26129b8ac3b01d72dd971536f6f32d360ef65d3a9bd90a2b6abfc"} Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.713720 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18dd01f0dec26129b8ac3b01d72dd971536f6f32d360ef65d3a9bd90a2b6abfc" Jan 21 15:15:03 crc kubenswrapper[4902]: I0121 15:15:03.713757 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d" Jan 21 15:15:04 crc kubenswrapper[4902]: I0121 15:15:04.042987 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"] Jan 21 15:15:04 crc kubenswrapper[4902]: I0121 15:15:04.049638 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-xwzfw"] Jan 21 15:15:04 crc kubenswrapper[4902]: I0121 15:15:04.305887 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70656800-9429-43df-a1cb-7c8617d23b3f" path="/var/lib/kubelet/pods/70656800-9429-43df-a1cb-7c8617d23b3f/volumes" Jan 21 15:15:09 crc kubenswrapper[4902]: I0121 15:15:09.295199 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:15:09 crc kubenswrapper[4902]: E0121 15:15:09.296081 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:21 crc kubenswrapper[4902]: I0121 15:15:21.721989 4902 scope.go:117] "RemoveContainer" containerID="de8fcd8c3571217b412f9ba6c688fc875ba6c7c7eb18b7b87d8ab03820c43542" Jan 21 15:15:24 crc kubenswrapper[4902]: I0121 15:15:24.294898 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:15:24 crc kubenswrapper[4902]: E0121 15:15:24.295464 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:39 crc kubenswrapper[4902]: I0121 15:15:39.294870 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:15:39 crc kubenswrapper[4902]: E0121 15:15:39.295906 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.469088 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:15:52 crc kubenswrapper[4902]: E0121 15:15:52.469922 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebd9484-ab72-4bbd-84e7-99f28795ad85" containerName="collect-profiles" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.469939 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd9484-ab72-4bbd-84e7-99f28795ad85" containerName="collect-profiles" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 
15:15:52.470147 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebd9484-ab72-4bbd-84e7-99f28795ad85" containerName="collect-profiles" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.471266 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.506302 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.629013 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d4sc\" (UniqueName: \"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.629690 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.629778 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.731591 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.730829 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.731745 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.732182 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.732328 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d4sc\" (UniqueName: 
\"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.757727 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d4sc\" (UniqueName: \"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") pod \"certified-operators-548m6\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:52 crc kubenswrapper[4902]: I0121 15:15:52.800481 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:15:53 crc kubenswrapper[4902]: I0121 15:15:53.243670 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:15:53 crc kubenswrapper[4902]: W0121 15:15:53.248993 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b76b38_be5a_4672_b09e_478fd80b1c0c.slice/crio-674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff WatchSource:0}: Error finding container 674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff: Status 404 returned error can't find the container with id 674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff Jan 21 15:15:53 crc kubenswrapper[4902]: I0121 15:15:53.294999 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:15:53 crc kubenswrapper[4902]: E0121 15:15:53.295249 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:15:54 crc kubenswrapper[4902]: I0121 15:15:54.132157 4902 generic.go:334] "Generic (PLEG): container finished" podID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerID="584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4" exitCode=0 Jan 21 15:15:54 crc kubenswrapper[4902]: I0121 15:15:54.132596 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerDied","Data":"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4"} Jan 21 15:15:54 crc kubenswrapper[4902]: I0121 15:15:54.132652 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerStarted","Data":"674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff"} Jan 21 15:15:54 crc kubenswrapper[4902]: I0121 15:15:54.135357 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:15:56 crc kubenswrapper[4902]: I0121 15:15:56.148937 4902 generic.go:334] "Generic (PLEG): container finished" podID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerID="e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205" exitCode=0 Jan 21 15:15:56 crc 
kubenswrapper[4902]: I0121 15:15:56.149058 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerDied","Data":"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205"} Jan 21 15:15:57 crc kubenswrapper[4902]: I0121 15:15:57.159964 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerStarted","Data":"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026"} Jan 21 15:15:57 crc kubenswrapper[4902]: I0121 15:15:57.179886 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-548m6" podStartSLOduration=2.722377917 podStartE2EDuration="5.179871183s" podCreationTimestamp="2026-01-21 15:15:52 +0000 UTC" firstStartedPulling="2026-01-21 15:15:54.135062019 +0000 UTC m=+2516.211895068" lastFinishedPulling="2026-01-21 15:15:56.592555305 +0000 UTC m=+2518.669388334" observedRunningTime="2026-01-21 15:15:57.177879417 +0000 UTC m=+2519.254712446" watchObservedRunningTime="2026-01-21 15:15:57.179871183 +0000 UTC m=+2519.256704212" Jan 21 15:16:02 crc kubenswrapper[4902]: I0121 15:16:02.800912 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:02 crc kubenswrapper[4902]: I0121 15:16:02.802178 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:02 crc kubenswrapper[4902]: I0121 15:16:02.840523 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:03 crc kubenswrapper[4902]: I0121 15:16:03.268849 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:03 crc kubenswrapper[4902]: I0121 15:16:03.318743 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:16:05 crc kubenswrapper[4902]: I0121 15:16:05.221347 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-548m6" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="registry-server" containerID="cri-o://72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" gracePeriod=2 Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.190082 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245193 4902 generic.go:334] "Generic (PLEG): container finished" podID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerID="72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" exitCode=0 Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245241 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerDied","Data":"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026"} Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245272 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548m6" event={"ID":"c3b76b38-be5a-4672-b09e-478fd80b1c0c","Type":"ContainerDied","Data":"674d9e4d0b5e1c13514bc938797d4379dd0b8c270486271e8d0cf4d945d5cdff"} Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245293 4902 scope.go:117] "RemoveContainer" containerID="72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.245320 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-548m6" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.265710 4902 scope.go:117] "RemoveContainer" containerID="e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.282722 4902 scope.go:117] "RemoveContainer" containerID="584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.302640 4902 scope.go:117] "RemoveContainer" containerID="72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" Jan 21 15:16:06 crc kubenswrapper[4902]: E0121 15:16:06.303447 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026\": container with ID starting with 72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026 not found: ID does not exist" containerID="72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.303486 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026"} err="failed to get container status \"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026\": rpc error: code = NotFound desc = could not find container \"72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026\": container with ID starting with 72707d29cc6343b59a6d71543da6e3d64c5b32693e4f2f28c04854e1eac1a026 not found: ID does not exist" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.303514 4902 scope.go:117] "RemoveContainer" containerID="e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205" Jan 21 15:16:06 crc kubenswrapper[4902]: E0121 15:16:06.303957 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205\": container with ID starting with e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205 not found: ID does not exist" 
containerID="e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.303984 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205"} err="failed to get container status \"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205\": rpc error: code = NotFound desc = could not find container \"e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205\": container with ID starting with e523a31341dc80f85707e2fda6607e76debaecec3798bd5ba267c4e0b129d205 not found: ID does not exist" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.303998 4902 scope.go:117] "RemoveContainer" containerID="584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4" Jan 21 15:16:06 crc kubenswrapper[4902]: E0121 15:16:06.305185 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4\": container with ID starting with 584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4 not found: ID does not exist" containerID="584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.305210 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4"} err="failed to get container status \"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4\": rpc error: code = NotFound desc = could not find container \"584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4\": container with ID starting with 584668f054666391ea99936131b6f15547c60f5d701a7218a6b40f17a4a3fbf4 not found: ID does not exist" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.345669 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") pod \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.345729 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d4sc\" (UniqueName: \"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") pod \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.345757 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") pod \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\" (UID: \"c3b76b38-be5a-4672-b09e-478fd80b1c0c\") " Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.347107 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities" (OuterVolumeSpecName: "utilities") pod "c3b76b38-be5a-4672-b09e-478fd80b1c0c" (UID: "c3b76b38-be5a-4672-b09e-478fd80b1c0c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.352232 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc" (OuterVolumeSpecName: "kube-api-access-7d4sc") pod "c3b76b38-be5a-4672-b09e-478fd80b1c0c" (UID: "c3b76b38-be5a-4672-b09e-478fd80b1c0c"). InnerVolumeSpecName "kube-api-access-7d4sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.398392 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3b76b38-be5a-4672-b09e-478fd80b1c0c" (UID: "c3b76b38-be5a-4672-b09e-478fd80b1c0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.447472 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.447502 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b76b38-be5a-4672-b09e-478fd80b1c0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.447514 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d4sc\" (UniqueName: \"kubernetes.io/projected/c3b76b38-be5a-4672-b09e-478fd80b1c0c-kube-api-access-7d4sc\") on node \"crc\" DevicePath \"\"" Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.602200 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:16:06 crc kubenswrapper[4902]: I0121 15:16:06.609849 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-548m6"] Jan 21 15:16:07 crc kubenswrapper[4902]: I0121 15:16:07.295316 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:16:07 crc kubenswrapper[4902]: E0121 15:16:07.295757 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:16:08 crc kubenswrapper[4902]: I0121 15:16:08.304180 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" path="/var/lib/kubelet/pods/c3b76b38-be5a-4672-b09e-478fd80b1c0c/volumes" Jan 21 15:16:22 crc kubenswrapper[4902]: I0121 15:16:22.295270 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:16:22 crc kubenswrapper[4902]: E0121 15:16:22.295971 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:16:36 crc kubenswrapper[4902]: I0121 15:16:36.295058 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:16:36 crc kubenswrapper[4902]: E0121 15:16:36.295859 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:16:48 crc kubenswrapper[4902]: I0121 15:16:48.298864 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:16:48 crc kubenswrapper[4902]: E0121 15:16:48.299345 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:01 crc kubenswrapper[4902]: I0121 15:17:01.294918 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:01 crc kubenswrapper[4902]: E0121 15:17:01.295611 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:14 crc kubenswrapper[4902]: I0121 15:17:14.294864 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:14 crc kubenswrapper[4902]: E0121 15:17:14.296021 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:28 crc kubenswrapper[4902]: I0121 15:17:28.299956 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:28 crc kubenswrapper[4902]: E0121 15:17:28.300931 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:42 crc kubenswrapper[4902]: I0121 15:17:42.295201 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:42 crc kubenswrapper[4902]: E0121 15:17:42.295793 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:17:54 crc kubenswrapper[4902]: I0121 15:17:54.294604 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:17:54 crc kubenswrapper[4902]: E0121 15:17:54.295395 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:18:08 crc kubenswrapper[4902]: I0121 15:18:08.300678 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:18:08 crc kubenswrapper[4902]: E0121 15:18:08.301568 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:18:21 crc kubenswrapper[4902]: I0121 15:18:21.294966 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:18:21 crc kubenswrapper[4902]: E0121 15:18:21.295810 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:18:36 crc kubenswrapper[4902]: I0121 15:18:36.295000 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:18:36 crc kubenswrapper[4902]: E0121 15:18:36.295975 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:18:50 crc kubenswrapper[4902]: I0121 15:18:50.295489 4902 scope.go:117] "RemoveContainer" 
containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:18:50 crc kubenswrapper[4902]: E0121 15:18:50.296302 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:19:02 crc kubenswrapper[4902]: I0121 15:19:02.295231 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:19:02 crc kubenswrapper[4902]: E0121 15:19:02.295885 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:19:13 crc kubenswrapper[4902]: I0121 15:19:13.295486 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:19:13 crc kubenswrapper[4902]: E0121 15:19:13.296182 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:19:25 crc kubenswrapper[4902]: I0121 15:19:25.295397 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:19:25 crc kubenswrapper[4902]: I0121 15:19:25.771639 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9"} Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.423691 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:30 crc kubenswrapper[4902]: E0121 15:19:30.424599 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="extract-content" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.424618 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="extract-content" Jan 21 15:19:30 crc kubenswrapper[4902]: E0121 15:19:30.424660 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="extract-utilities" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.424669 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="extract-utilities" Jan 21 15:19:30 crc kubenswrapper[4902]: E0121 15:19:30.424681 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="registry-server" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.424691 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="registry-server" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.424815 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b76b38-be5a-4672-b09e-478fd80b1c0c" containerName="registry-server" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.426944 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.437887 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.514679 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.514794 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgxl\" (UniqueName: \"kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.514837 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.615675 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.615744 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzgxl\" (UniqueName: \"kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.615766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.616260 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities\") pod \"redhat-marketplace-znnbw\" (UID: 
\"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.616479 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.634284 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzgxl\" (UniqueName: \"kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl\") pod \"redhat-marketplace-znnbw\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:30 crc kubenswrapper[4902]: I0121 15:19:30.742966 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:31 crc kubenswrapper[4902]: I0121 15:19:31.155478 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:31 crc kubenswrapper[4902]: I0121 15:19:31.811984 4902 generic.go:334] "Generic (PLEG): container finished" podID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerID="a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee" exitCode=0 Jan 21 15:19:31 crc kubenswrapper[4902]: I0121 15:19:31.812118 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerDied","Data":"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee"} Jan 21 15:19:31 crc kubenswrapper[4902]: I0121 15:19:31.812429 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerStarted","Data":"7af97d4ed275a1ae5c9629fc436df7ed6ef28556298e74dd885f31e365b940b5"} Jan 21 15:19:33 crc kubenswrapper[4902]: I0121 15:19:33.828246 4902 generic.go:334] "Generic (PLEG): container finished" podID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerID="3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba" exitCode=0 Jan 21 15:19:33 crc kubenswrapper[4902]: I0121 15:19:33.828306 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerDied","Data":"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba"} Jan 21 15:19:34 crc kubenswrapper[4902]: I0121 15:19:34.837788 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerStarted","Data":"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57"} Jan 21 15:19:34 crc kubenswrapper[4902]: I0121 15:19:34.856247 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znnbw" podStartSLOduration=2.451734481 podStartE2EDuration="4.856230214s" podCreationTimestamp="2026-01-21 15:19:30 +0000 UTC" firstStartedPulling="2026-01-21 15:19:31.813850453 +0000 UTC m=+2733.890683482" lastFinishedPulling="2026-01-21 15:19:34.218346186 +0000 UTC m=+2736.295179215" observedRunningTime="2026-01-21 
15:19:34.853582228 +0000 UTC m=+2736.930415267" watchObservedRunningTime="2026-01-21 15:19:34.856230214 +0000 UTC m=+2736.933063243" Jan 21 15:19:40 crc kubenswrapper[4902]: I0121 15:19:40.743731 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:40 crc kubenswrapper[4902]: I0121 15:19:40.744351 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:40 crc kubenswrapper[4902]: I0121 15:19:40.798309 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:40 crc kubenswrapper[4902]: I0121 15:19:40.932098 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:41 crc kubenswrapper[4902]: I0121 15:19:41.046060 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:42 crc kubenswrapper[4902]: I0121 15:19:42.895733 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-znnbw" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="registry-server" containerID="cri-o://254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" gracePeriod=2 Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.367904 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.521143 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities\") pod \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.521280 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content\") pod \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.521307 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzgxl\" (UniqueName: \"kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl\") pod \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\" (UID: \"fc75a5cf-c2f6-4ec4-bb1b-715732baded5\") " Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.522004 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities" (OuterVolumeSpecName: "utilities") pod "fc75a5cf-c2f6-4ec4-bb1b-715732baded5" (UID: "fc75a5cf-c2f6-4ec4-bb1b-715732baded5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.539454 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl" (OuterVolumeSpecName: "kube-api-access-wzgxl") pod "fc75a5cf-c2f6-4ec4-bb1b-715732baded5" (UID: "fc75a5cf-c2f6-4ec4-bb1b-715732baded5"). InnerVolumeSpecName "kube-api-access-wzgxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.549006 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc75a5cf-c2f6-4ec4-bb1b-715732baded5" (UID: "fc75a5cf-c2f6-4ec4-bb1b-715732baded5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.622847 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.622901 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzgxl\" (UniqueName: \"kubernetes.io/projected/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-kube-api-access-wzgxl\") on node \"crc\" DevicePath \"\"" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.622919 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc75a5cf-c2f6-4ec4-bb1b-715732baded5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905107 4902 generic.go:334] "Generic (PLEG): container finished" podID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerID="254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" exitCode=0 Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905165 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znnbw" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905182 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerDied","Data":"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57"} Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905531 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znnbw" event={"ID":"fc75a5cf-c2f6-4ec4-bb1b-715732baded5","Type":"ContainerDied","Data":"7af97d4ed275a1ae5c9629fc436df7ed6ef28556298e74dd885f31e365b940b5"} Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.905574 4902 scope.go:117] "RemoveContainer" containerID="254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.928287 4902 scope.go:117] "RemoveContainer" containerID="3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.953495 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.956420 4902 scope.go:117] "RemoveContainer" containerID="a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.958201 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-znnbw"] Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.969844 4902 scope.go:117] "RemoveContainer" containerID="254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" Jan 21 15:19:43 crc kubenswrapper[4902]: E0121 15:19:43.970377 4902 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57\": container with ID starting with 254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57 not found: ID does not exist" containerID="254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.970432 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57"} err="failed to get container status \"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57\": rpc error: code = NotFound desc = could not find container \"254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57\": container with ID starting with 254b6d65bfafee2fbf50f1c78d79891071db1e71c91fbef0b69e210982cf7d57 not found: ID does not exist" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.970461 4902 scope.go:117] "RemoveContainer" containerID="3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba" Jan 21 15:19:43 crc kubenswrapper[4902]: E0121 15:19:43.970851 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba\": container with ID starting with 3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba not found: ID does not exist" containerID="3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.970895 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba"} err="failed to get container status \"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba\": rpc error: code = NotFound desc = could not find container \"3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba\": container with ID starting with 3ad8c3582399a71daeb81b537ebc1fefe326cc6c0e9ef9d37eb4b7187ba1d0ba not found: ID does not exist" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.970932 4902 scope.go:117] "RemoveContainer" containerID="a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee" Jan 21 15:19:43 crc kubenswrapper[4902]: E0121 15:19:43.971269 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee\": container with ID starting with a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee not found: ID does not exist" containerID="a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee" Jan 21 15:19:43 crc kubenswrapper[4902]: I0121 15:19:43.971317 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee"} err="failed to get container status \"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee\": rpc error: code = NotFound desc = could not find container \"a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee\": container with ID starting with a0619e3491245d9ba483a1c1f53c32fc39c899fefbe01cac1f03c55c440659ee not found: ID does not exist" Jan 21 15:19:44 crc kubenswrapper[4902]: I0121 15:19:44.309875 4902 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" path="/var/lib/kubelet/pods/fc75a5cf-c2f6-4ec4-bb1b-715732baded5/volumes" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.125668 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:12 crc kubenswrapper[4902]: E0121 15:21:12.126511 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="extract-content" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.126527 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="extract-content" Jan 21 15:21:12 crc kubenswrapper[4902]: E0121 15:21:12.126554 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="registry-server" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.126563 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="registry-server" Jan 21 15:21:12 crc kubenswrapper[4902]: E0121 15:21:12.126587 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="extract-utilities" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.126596 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="extract-utilities" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.126768 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc75a5cf-c2f6-4ec4-bb1b-715732baded5" containerName="registry-server" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.147733 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.162088 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.176853 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.176913 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.177024 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.278563 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.278623 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.279284 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.279298 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.279311 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.300999 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") pod \"community-operators-5jt5k\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.486649 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:12 crc kubenswrapper[4902]: I0121 15:21:12.952990 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:12 crc kubenswrapper[4902]: W0121 15:21:12.960110 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49a0f3d6_d1a4_40f2_bfa2_25e435864950.slice/crio-2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900 WatchSource:0}: Error finding container 2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900: Status 404 returned error can't find the container with id 2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900 Jan 21 15:21:13 crc kubenswrapper[4902]: I0121 15:21:13.601970 4902 generic.go:334] "Generic (PLEG): container finished" podID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerID="14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065" exitCode=0 Jan 21 15:21:13 crc kubenswrapper[4902]: I0121 15:21:13.602010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerDied","Data":"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065"} Jan 21 15:21:13 crc kubenswrapper[4902]: I0121 15:21:13.602036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerStarted","Data":"2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900"} Jan 21 15:21:13 crc kubenswrapper[4902]: I0121 15:21:13.605881 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:21:14 crc kubenswrapper[4902]: I0121 15:21:14.615780 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerStarted","Data":"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd"} Jan 21 15:21:15 crc kubenswrapper[4902]: I0121 15:21:15.626894 4902 generic.go:334] "Generic (PLEG): container finished" podID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerID="fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd" exitCode=0 Jan 21 15:21:15 crc kubenswrapper[4902]: I0121 15:21:15.626946 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerDied","Data":"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd"} Jan 21 15:21:16 crc kubenswrapper[4902]: I0121 15:21:16.637206 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerStarted","Data":"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f"} Jan 21 15:21:16 crc kubenswrapper[4902]: I0121 
15:21:16.660680 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5jt5k" podStartSLOduration=1.902132151 podStartE2EDuration="4.660662566s" podCreationTimestamp="2026-01-21 15:21:12 +0000 UTC" firstStartedPulling="2026-01-21 15:21:13.60515322 +0000 UTC m=+2835.681986289" lastFinishedPulling="2026-01-21 15:21:16.363683675 +0000 UTC m=+2838.440516704" observedRunningTime="2026-01-21 15:21:16.657466425 +0000 UTC m=+2838.734299464" watchObservedRunningTime="2026-01-21 15:21:16.660662566 +0000 UTC m=+2838.737495595" Jan 21 15:21:22 crc kubenswrapper[4902]: I0121 15:21:22.487100 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:22 crc kubenswrapper[4902]: I0121 15:21:22.487980 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:22 crc kubenswrapper[4902]: I0121 15:21:22.533528 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:22 crc kubenswrapper[4902]: I0121 15:21:22.747232 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:23 crc kubenswrapper[4902]: I0121 15:21:23.910398 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:24 crc kubenswrapper[4902]: I0121 15:21:24.692943 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5jt5k" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="registry-server" containerID="cri-o://958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" gracePeriod=2 Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.621581 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700670 4902 generic.go:334] "Generic (PLEG): container finished" podID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerID="958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" exitCode=0 Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700710 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerDied","Data":"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f"} Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700734 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jt5k" event={"ID":"49a0f3d6-d1a4-40f2-bfa2-25e435864950","Type":"ContainerDied","Data":"2fff381c71980f38e99d9c08a1725ee6490e2b20f384b83405ef046942fdb900"} Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700769 4902 scope.go:117] "RemoveContainer" containerID="958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.700884 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jt5k" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.720561 4902 scope.go:117] "RemoveContainer" containerID="fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.740016 4902 scope.go:117] "RemoveContainer" containerID="14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.778461 4902 scope.go:117] "RemoveContainer" containerID="958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" Jan 21 15:21:25 crc kubenswrapper[4902]: E0121 15:21:25.779192 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f\": container with ID starting with 958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f not found: ID does not exist" containerID="958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779234 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f"} err="failed to get container status \"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f\": rpc error: code = NotFound desc = could not find container \"958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f\": container with ID starting with 958e024a03cc9418d4c07fe9177fdbb9d49d304f696981963c7b5a41fb57569f not found: ID does not exist" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779266 4902 scope.go:117] "RemoveContainer" containerID="fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd" Jan 21 15:21:25 crc kubenswrapper[4902]: E0121 15:21:25.779517 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd\": container with ID starting with fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd not found: ID does not exist" containerID="fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779541 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd"} err="failed to get container status \"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd\": rpc error: code = NotFound desc = could not find container \"fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd\": container with ID starting with fff241d35cf1b860a0866179911b56484a74d4995ec07197fbf4bfaee3d90ddd not found: ID does not exist" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779558 4902 scope.go:117] "RemoveContainer" containerID="14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065" Jan 21 15:21:25 crc kubenswrapper[4902]: E0121 15:21:25.779850 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065\": container with ID starting with 14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065 not found: ID does not exist" containerID="14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065" 
Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.779866 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065"} err="failed to get container status \"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065\": rpc error: code = NotFound desc = could not find container \"14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065\": container with ID starting with 14d871613afdc9d268fe054a680ca6d69258d71c162792b607ec10b8abbd3065 not found: ID does not exist" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.783622 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") pod \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.783717 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") pod \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.783783 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") pod \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\" (UID: \"49a0f3d6-d1a4-40f2-bfa2-25e435864950\") " Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.786599 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities" (OuterVolumeSpecName: "utilities") pod "49a0f3d6-d1a4-40f2-bfa2-25e435864950" (UID: "49a0f3d6-d1a4-40f2-bfa2-25e435864950"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.792467 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg" (OuterVolumeSpecName: "kube-api-access-j8gmg") pod "49a0f3d6-d1a4-40f2-bfa2-25e435864950" (UID: "49a0f3d6-d1a4-40f2-bfa2-25e435864950"). InnerVolumeSpecName "kube-api-access-j8gmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.837180 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49a0f3d6-d1a4-40f2-bfa2-25e435864950" (UID: "49a0f3d6-d1a4-40f2-bfa2-25e435864950"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.885675 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8gmg\" (UniqueName: \"kubernetes.io/projected/49a0f3d6-d1a4-40f2-bfa2-25e435864950-kube-api-access-j8gmg\") on node \"crc\" DevicePath \"\"" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.885707 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:21:25 crc kubenswrapper[4902]: I0121 15:21:25.885716 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a0f3d6-d1a4-40f2-bfa2-25e435864950-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:21:26 crc kubenswrapper[4902]: I0121 15:21:26.034734 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:26 crc kubenswrapper[4902]: I0121 15:21:26.039735 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5jt5k"] Jan 21 15:21:26 crc kubenswrapper[4902]: I0121 15:21:26.306994 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" path="/var/lib/kubelet/pods/49a0f3d6-d1a4-40f2-bfa2-25e435864950/volumes" Jan 21 15:21:47 crc kubenswrapper[4902]: I0121 15:21:47.770185 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:21:47 crc kubenswrapper[4902]: I0121 15:21:47.770823 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:22:17 crc kubenswrapper[4902]: I0121 15:22:17.770493 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:22:17 crc kubenswrapper[4902]: I0121 15:22:17.771402 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.769908 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.770522 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.770598 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.771644 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:22:47 crc kubenswrapper[4902]: I0121 15:22:47.771774 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9" gracePeriod=600 Jan 21 15:22:48 crc kubenswrapper[4902]: I0121 15:22:48.357254 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9" exitCode=0 Jan 21 15:22:48 crc kubenswrapper[4902]: I0121 15:22:48.357345 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9"} Jan 21 15:22:48 crc kubenswrapper[4902]: I0121 15:22:48.357575 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"} Jan 21 15:22:48 crc kubenswrapper[4902]: I0121 15:22:48.357618 4902 scope.go:117] "RemoveContainer" containerID="f1c7da8156948865b315737625dc7abb2668a51f59a761dbc7794977e288de27" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.758030 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:06 crc kubenswrapper[4902]: E0121 15:25:06.760605 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="extract-content" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.760627 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="extract-content" Jan 21 15:25:06 crc kubenswrapper[4902]: E0121 15:25:06.760636 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="registry-server" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.760642 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="registry-server" Jan 21 15:25:06 crc kubenswrapper[4902]: E0121 15:25:06.760654 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="extract-utilities" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.760661 4902 
state_mem.go:107] "Deleted CPUSet assignment" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="extract-utilities" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.760831 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a0f3d6-d1a4-40f2-bfa2-25e435864950" containerName="registry-server" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.762028 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.774557 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.857937 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.858086 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.858127 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.958855 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.958933 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.958960 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.959878 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.960213 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:06 crc kubenswrapper[4902]: I0121 15:25:06.980432 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") pod \"redhat-operators-jh5w8\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:07 crc kubenswrapper[4902]: I0121 15:25:07.095520 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:07 crc kubenswrapper[4902]: I0121 15:25:07.526246 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:07 crc kubenswrapper[4902]: W0121 15:25:07.530670 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5224a84_9644_4cdc_b3c4_eed2488ae61d.slice/crio-0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1 WatchSource:0}: Error finding container 0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1: Status 404 returned error can't find the container with id 0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1 Jan 21 15:25:08 crc kubenswrapper[4902]: I0121 15:25:08.483077 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerID="8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811" exitCode=0 Jan 21 15:25:08 crc kubenswrapper[4902]: I0121 15:25:08.483239 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerDied","Data":"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811"} Jan 21 15:25:08 crc kubenswrapper[4902]: I0121 15:25:08.483361 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerStarted","Data":"0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1"} Jan 21 15:25:10 crc kubenswrapper[4902]: I0121 15:25:10.501478 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerID="cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85" exitCode=0 Jan 21 15:25:10 crc kubenswrapper[4902]: I0121 15:25:10.501525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerDied","Data":"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85"} Jan 21 15:25:11 crc kubenswrapper[4902]: I0121 15:25:11.510070 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerStarted","Data":"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581"} Jan 21 15:25:11 crc kubenswrapper[4902]: I0121 15:25:11.525617 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-jh5w8" podStartSLOduration=2.743580443 podStartE2EDuration="5.525597639s" podCreationTimestamp="2026-01-21 15:25:06 +0000 UTC" firstStartedPulling="2026-01-21 15:25:08.484392747 +0000 UTC m=+3070.561225776" lastFinishedPulling="2026-01-21 15:25:11.266409933 +0000 UTC m=+3073.343242972" observedRunningTime="2026-01-21 15:25:11.525598769 +0000 UTC m=+3073.602431798" watchObservedRunningTime="2026-01-21 15:25:11.525597639 +0000 UTC m=+3073.602430668" Jan 21 15:25:17 crc kubenswrapper[4902]: I0121 15:25:17.097507 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:17 crc kubenswrapper[4902]: I0121 15:25:17.098134 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:17 crc kubenswrapper[4902]: I0121 15:25:17.769952 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:25:17 crc kubenswrapper[4902]: I0121 15:25:17.770118 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:25:18 crc kubenswrapper[4902]: I0121 15:25:18.150841 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jh5w8" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" probeResult="failure" output=< Jan 21 15:25:18 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 15:25:18 crc kubenswrapper[4902]: > Jan 21 15:25:27 crc kubenswrapper[4902]: I0121 15:25:27.136764 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:27 crc kubenswrapper[4902]: I0121 15:25:27.176998 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:27 crc kubenswrapper[4902]: I0121 15:25:27.367020 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:28 crc kubenswrapper[4902]: I0121 15:25:28.754899 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jh5w8" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" containerID="cri-o://3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" gracePeriod=2 Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.672926 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.680224 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") pod \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.680283 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") pod \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.680322 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") pod \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\" (UID: \"f5224a84-9644-4cdc-b3c4-eed2488ae61d\") " Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.681572 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities" (OuterVolumeSpecName: "utilities") pod "f5224a84-9644-4cdc-b3c4-eed2488ae61d" (UID: "f5224a84-9644-4cdc-b3c4-eed2488ae61d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.686524 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj" (OuterVolumeSpecName: "kube-api-access-2cfqj") pod "f5224a84-9644-4cdc-b3c4-eed2488ae61d" (UID: "f5224a84-9644-4cdc-b3c4-eed2488ae61d"). InnerVolumeSpecName "kube-api-access-2cfqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766698 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerID="3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" exitCode=0 Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766736 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerDied","Data":"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581"} Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766760 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh5w8" event={"ID":"f5224a84-9644-4cdc-b3c4-eed2488ae61d","Type":"ContainerDied","Data":"0b5459aee9b23bbb40e190f872efd622cfffaa20e0ae6affbe426245a2f1dda1"} Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766777 4902 scope.go:117] "RemoveContainer" containerID="3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.766929 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jh5w8" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.783325 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cfqj\" (UniqueName: \"kubernetes.io/projected/f5224a84-9644-4cdc-b3c4-eed2488ae61d-kube-api-access-2cfqj\") on node \"crc\" DevicePath \"\"" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.783359 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.797361 4902 scope.go:117] "RemoveContainer" containerID="cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.825439 4902 scope.go:117] "RemoveContainer" containerID="8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.832248 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5224a84-9644-4cdc-b3c4-eed2488ae61d" (UID: "f5224a84-9644-4cdc-b3c4-eed2488ae61d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.846947 4902 scope.go:117] "RemoveContainer" containerID="3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" Jan 21 15:25:29 crc kubenswrapper[4902]: E0121 15:25:29.847258 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581\": container with ID starting with 3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581 not found: ID does not exist" containerID="3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.847295 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581"} err="failed to get container status \"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581\": rpc error: code = NotFound desc = could not find container \"3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581\": container with ID starting with 3958248c8ae7dc076e78ea85aeafdd48f3f020145370088acf1fad950f44f581 not found: ID does not exist" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.847320 4902 scope.go:117] "RemoveContainer" containerID="cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85" Jan 21 15:25:29 crc kubenswrapper[4902]: E0121 15:25:29.847642 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85\": container with ID starting with cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85 not found: ID does not exist" containerID="cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.847675 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85"} err="failed to get 
container status \"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85\": rpc error: code = NotFound desc = could not find container \"cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85\": container with ID starting with cf3c1d1246b4fb81294d95f7d00a39205f7f983eaab1d058a8fa6a8897855e85 not found: ID does not exist" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.847696 4902 scope.go:117] "RemoveContainer" containerID="8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811" Jan 21 15:25:29 crc kubenswrapper[4902]: E0121 15:25:29.848299 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811\": container with ID starting with 8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811 not found: ID does not exist" containerID="8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.848325 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811"} err="failed to get container status \"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811\": rpc error: code = NotFound desc = could not find container \"8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811\": container with ID starting with 8a5f95d5679b15fc65d6a7b55c9c852a36827e939807e21416015d5fc062d811 not found: ID does not exist" Jan 21 15:25:29 crc kubenswrapper[4902]: I0121 15:25:29.885253 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5224a84-9644-4cdc-b3c4-eed2488ae61d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:25:30 crc kubenswrapper[4902]: I0121 15:25:30.120218 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:30 crc kubenswrapper[4902]: I0121 15:25:30.126219 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jh5w8"] Jan 21 15:25:30 crc kubenswrapper[4902]: I0121 15:25:30.308449 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" path="/var/lib/kubelet/pods/f5224a84-9644-4cdc-b3c4-eed2488ae61d/volumes" Jan 21 15:25:47 crc kubenswrapper[4902]: I0121 15:25:47.769782 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:25:47 crc kubenswrapper[4902]: I0121 15:25:47.770520 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.770771 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 
15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.771536 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.771614 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.772657 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:26:17 crc kubenswrapper[4902]: I0121 15:26:17.772745 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" gracePeriod=600 Jan 21 15:26:17 crc kubenswrapper[4902]: E0121 15:26:17.894677 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:26:18 crc kubenswrapper[4902]: I0121 15:26:18.162373 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" exitCode=0 Jan 21 15:26:18 crc kubenswrapper[4902]: I0121 15:26:18.162722 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e"} Jan 21 15:26:18 crc kubenswrapper[4902]: I0121 15:26:18.162909 4902 scope.go:117] "RemoveContainer" containerID="9118bec5924d18ddd618a8750d04dac5bfd45c7ae04f2acab924299ac7122ce9" Jan 21 15:26:18 crc kubenswrapper[4902]: I0121 15:26:18.163444 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:26:18 crc kubenswrapper[4902]: E0121 15:26:18.163723 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:26:31 crc kubenswrapper[4902]: I0121 15:26:31.295140 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:26:31 crc 
kubenswrapper[4902]: E0121 15:26:31.296027 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.255879 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mklsf"] Jan 21 15:26:32 crc kubenswrapper[4902]: E0121 15:26:32.256192 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.256208 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" Jan 21 15:26:32 crc kubenswrapper[4902]: E0121 15:26:32.256227 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="extract-utilities" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.256235 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="extract-utilities" Jan 21 15:26:32 crc kubenswrapper[4902]: E0121 15:26:32.256279 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="extract-content" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.256286 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="extract-content" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.256458 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5224a84-9644-4cdc-b3c4-eed2488ae61d" containerName="registry-server" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.257575 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.279290 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mklsf"] Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.354414 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-catalog-content\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.354490 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-utilities\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.354567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cqrk\" (UniqueName: \"kubernetes.io/projected/2ec2690b-73b2-45db-b14b-355b80ab92a6-kube-api-access-6cqrk\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456226 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-catalog-content\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456283 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-utilities\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456339 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cqrk\" (UniqueName: \"kubernetes.io/projected/2ec2690b-73b2-45db-b14b-355b80ab92a6-kube-api-access-6cqrk\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-catalog-content\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.456854 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ec2690b-73b2-45db-b14b-355b80ab92a6-utilities\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.476836 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6cqrk\" (UniqueName: \"kubernetes.io/projected/2ec2690b-73b2-45db-b14b-355b80ab92a6-kube-api-access-6cqrk\") pod \"certified-operators-mklsf\" (UID: \"2ec2690b-73b2-45db-b14b-355b80ab92a6\") " pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:32 crc kubenswrapper[4902]: I0121 15:26:32.590178 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:33 crc kubenswrapper[4902]: I0121 15:26:33.061884 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mklsf"] Jan 21 15:26:33 crc kubenswrapper[4902]: I0121 15:26:33.284094 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerStarted","Data":"b7497e30666457211bc1dbff0d19c6d29f8267a666765fa8bec62fbde6239e21"} Jan 21 15:26:33 crc kubenswrapper[4902]: I0121 15:26:33.284182 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerStarted","Data":"1427ab1882dfac4469780ae6a2ba2c61e1cf315860af91b43a52ef914530182e"} Jan 21 15:26:34 crc kubenswrapper[4902]: I0121 15:26:34.299571 4902 generic.go:334] "Generic (PLEG): container finished" podID="2ec2690b-73b2-45db-b14b-355b80ab92a6" containerID="b7497e30666457211bc1dbff0d19c6d29f8267a666765fa8bec62fbde6239e21" exitCode=0 Jan 21 15:26:34 crc kubenswrapper[4902]: I0121 15:26:34.302678 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:26:34 crc kubenswrapper[4902]: I0121 15:26:34.306523 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerDied","Data":"b7497e30666457211bc1dbff0d19c6d29f8267a666765fa8bec62fbde6239e21"} Jan 21 15:26:38 crc kubenswrapper[4902]: I0121 15:26:38.335536 4902 generic.go:334] "Generic (PLEG): container finished" podID="2ec2690b-73b2-45db-b14b-355b80ab92a6" containerID="aff16eb520fba6a9e2277db12e779189239f388a24f571354f5779dc3e7d15e7" exitCode=0 Jan 21 15:26:38 crc kubenswrapper[4902]: I0121 15:26:38.335653 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerDied","Data":"aff16eb520fba6a9e2277db12e779189239f388a24f571354f5779dc3e7d15e7"} Jan 21 15:26:40 crc kubenswrapper[4902]: I0121 15:26:40.352704 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mklsf" event={"ID":"2ec2690b-73b2-45db-b14b-355b80ab92a6","Type":"ContainerStarted","Data":"f70837e6519eb0b6b2c831e5486484d916388f44857bbc9bc3d77a0eeea931f8"} Jan 21 15:26:40 crc kubenswrapper[4902]: I0121 15:26:40.380557 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mklsf" podStartSLOduration=3.382409175 podStartE2EDuration="8.38053745s" podCreationTimestamp="2026-01-21 15:26:32 +0000 UTC" firstStartedPulling="2026-01-21 15:26:34.302350074 +0000 UTC m=+3156.379183113" lastFinishedPulling="2026-01-21 15:26:39.300478359 +0000 UTC m=+3161.377311388" observedRunningTime="2026-01-21 15:26:40.374858052 +0000 UTC m=+3162.451691111" watchObservedRunningTime="2026-01-21 
15:26:40.38053745 +0000 UTC m=+3162.457370479" Jan 21 15:26:42 crc kubenswrapper[4902]: I0121 15:26:42.591247 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:42 crc kubenswrapper[4902]: I0121 15:26:42.591679 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:42 crc kubenswrapper[4902]: I0121 15:26:42.633006 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:43 crc kubenswrapper[4902]: I0121 15:26:43.295313 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:26:43 crc kubenswrapper[4902]: E0121 15:26:43.295597 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:26:52 crc kubenswrapper[4902]: I0121 15:26:52.638605 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mklsf" Jan 21 15:26:52 crc kubenswrapper[4902]: I0121 15:26:52.704358 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mklsf"] Jan 21 15:26:52 crc kubenswrapper[4902]: I0121 15:26:52.748797 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:26:52 crc kubenswrapper[4902]: I0121 15:26:52.749056 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7vpk9" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="registry-server" containerID="cri-o://4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" gracePeriod=2 Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.118109 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.282942 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") pod \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.283020 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") pod \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.283890 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities" (OuterVolumeSpecName: "utilities") pod "8d2ff121-c8ec-43d3-b97d-e2f164b9f847" (UID: "8d2ff121-c8ec-43d3-b97d-e2f164b9f847"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.284110 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") pod \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\" (UID: \"8d2ff121-c8ec-43d3-b97d-e2f164b9f847\") " Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.284474 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.300785 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v" (OuterVolumeSpecName: "kube-api-access-clx2v") pod "8d2ff121-c8ec-43d3-b97d-e2f164b9f847" (UID: "8d2ff121-c8ec-43d3-b97d-e2f164b9f847"). InnerVolumeSpecName "kube-api-access-clx2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.340071 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d2ff121-c8ec-43d3-b97d-e2f164b9f847" (UID: "8d2ff121-c8ec-43d3-b97d-e2f164b9f847"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.385796 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.385837 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clx2v\" (UniqueName: \"kubernetes.io/projected/8d2ff121-c8ec-43d3-b97d-e2f164b9f847-kube-api-access-clx2v\") on node \"crc\" DevicePath \"\"" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.466863 4902 generic.go:334] "Generic (PLEG): container finished" podID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerID="4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" exitCode=0 Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.466931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerDied","Data":"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec"} Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.466991 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vpk9" event={"ID":"8d2ff121-c8ec-43d3-b97d-e2f164b9f847","Type":"ContainerDied","Data":"e875616804386b93d0ffc56d15792663f14f3e2f21397c783ad065bf8edceedc"} Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.467014 4902 scope.go:117] "RemoveContainer" containerID="4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.466958 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7vpk9" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.498871 4902 scope.go:117] "RemoveContainer" containerID="e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.514674 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.521250 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7vpk9"] Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.525397 4902 scope.go:117] "RemoveContainer" containerID="13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.543277 4902 scope.go:117] "RemoveContainer" containerID="4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" Jan 21 15:26:53 crc kubenswrapper[4902]: E0121 15:26:53.543733 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec\": container with ID starting with 4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec not found: ID does not exist" containerID="4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.543775 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec"} err="failed to get container status \"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec\": rpc error: code = NotFound desc = could not find container \"4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec\": container with ID starting with 4e54455c151dbbd8212d546a8d1d218db0247b826627ae3bd62e82cfb3a0a4ec not found: ID does not exist" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.543800 4902 scope.go:117] "RemoveContainer" containerID="e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0" Jan 21 15:26:53 crc kubenswrapper[4902]: E0121 15:26:53.544092 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0\": container with ID starting with e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0 not found: ID does not exist" containerID="e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.544119 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0"} err="failed to get container status \"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0\": rpc error: code = NotFound desc = could not find container \"e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0\": container with ID starting with e288745dbac7373e7837c167d172aa7a653c275a9298b12908e850485a6ca4a0 not found: ID does not exist" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.544136 4902 scope.go:117] "RemoveContainer" containerID="13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e" Jan 21 15:26:53 crc kubenswrapper[4902]: E0121 15:26:53.544370 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e\": container with ID starting with 13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e not found: ID does not exist" containerID="13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e" Jan 21 15:26:53 crc kubenswrapper[4902]: I0121 15:26:53.544400 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e"} err="failed to get container status \"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e\": rpc error: code = NotFound desc = could not find container \"13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e\": container with ID starting with 13b55829252b21c553e64dd12c86180c72698c6ac8e11d0e116e07a9e6aace7e not found: ID does not exist" Jan 21 15:26:54 crc kubenswrapper[4902]: I0121 15:26:54.302448 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" path="/var/lib/kubelet/pods/8d2ff121-c8ec-43d3-b97d-e2f164b9f847/volumes" Jan 21 15:26:58 crc kubenswrapper[4902]: I0121 15:26:58.303081 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:26:58 crc kubenswrapper[4902]: E0121 15:26:58.303725 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:27:12 crc kubenswrapper[4902]: I0121 15:27:12.300171 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:27:12 crc kubenswrapper[4902]: E0121 15:27:12.301232 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:27:24 crc kubenswrapper[4902]: I0121 15:27:24.296277 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:27:24 crc kubenswrapper[4902]: E0121 15:27:24.297412 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:27:36 crc kubenswrapper[4902]: I0121 15:27:36.295494 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:27:36 crc kubenswrapper[4902]: E0121 15:27:36.296639 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:27:50 crc kubenswrapper[4902]: I0121 15:27:50.295295 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:27:50 crc kubenswrapper[4902]: E0121 15:27:50.296890 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:28:03 crc kubenswrapper[4902]: I0121 15:28:03.295185 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:28:03 crc kubenswrapper[4902]: E0121 15:28:03.296017 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:28:15 crc kubenswrapper[4902]: I0121 15:28:15.294687 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:28:15 crc kubenswrapper[4902]: E0121 15:28:15.295470 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:28:26 crc kubenswrapper[4902]: I0121 15:28:26.295190 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:28:26 crc kubenswrapper[4902]: E0121 15:28:26.296468 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:28:41 crc kubenswrapper[4902]: I0121 15:28:41.294785 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:28:41 crc kubenswrapper[4902]: E0121 15:28:41.295655 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:28:54 crc kubenswrapper[4902]: I0121 15:28:54.295916 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:28:54 crc kubenswrapper[4902]: E0121 15:28:54.297122 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:29:07 crc kubenswrapper[4902]: I0121 15:29:07.294480 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:29:07 crc kubenswrapper[4902]: E0121 15:29:07.295174 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:29:22 crc kubenswrapper[4902]: I0121 15:29:22.295513 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:29:22 crc kubenswrapper[4902]: E0121 15:29:22.296305 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:29:34 crc kubenswrapper[4902]: I0121 15:29:34.295201 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:29:34 crc kubenswrapper[4902]: E0121 15:29:34.296426 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:29:46 crc kubenswrapper[4902]: I0121 15:29:46.296542 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:29:46 crc kubenswrapper[4902]: E0121 15:29:46.297533 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:29:57 crc kubenswrapper[4902]: I0121 15:29:57.294390 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:29:57 crc kubenswrapper[4902]: E0121 15:29:57.295255 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.158885 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 15:30:00 crc kubenswrapper[4902]: E0121 15:30:00.159702 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="registry-server" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.159715 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="registry-server" Jan 21 15:30:00 crc kubenswrapper[4902]: E0121 15:30:00.159726 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="extract-utilities" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.159733 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="extract-utilities" Jan 21 15:30:00 crc kubenswrapper[4902]: E0121 15:30:00.159743 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="extract-content" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.159749 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="extract-content" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.159894 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2ff121-c8ec-43d3-b97d-e2f164b9f847" containerName="registry-server" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.160374 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.162823 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.163705 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.173989 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.346852 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.346905 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.346978 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.448942 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.449008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.449075 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.451435 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") pod 
\"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.460005 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.484776 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") pod \"collect-profiles-29483490-b6ktg\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:00 crc kubenswrapper[4902]: I0121 15:30:00.777566 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:01 crc kubenswrapper[4902]: I0121 15:30:01.241818 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 15:30:01 crc kubenswrapper[4902]: I0121 15:30:01.978265 4902 generic.go:334] "Generic (PLEG): container finished" podID="e93c6a82-9651-4ed2-a941-9414d9aff62c" containerID="2d74f71a998726973b118e0b0755aa5903f2b68cb19dc4c893a565df10186a56" exitCode=0 Jan 21 15:30:01 crc kubenswrapper[4902]: I0121 15:30:01.978336 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" event={"ID":"e93c6a82-9651-4ed2-a941-9414d9aff62c","Type":"ContainerDied","Data":"2d74f71a998726973b118e0b0755aa5903f2b68cb19dc4c893a565df10186a56"} Jan 21 15:30:01 crc kubenswrapper[4902]: I0121 15:30:01.978678 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" event={"ID":"e93c6a82-9651-4ed2-a941-9414d9aff62c","Type":"ContainerStarted","Data":"e6b6e42c855295ba91f6834b95903c938c31c49afcc92b34579974a80c3b5cbc"} Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.328667 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.492512 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") pod \"e93c6a82-9651-4ed2-a941-9414d9aff62c\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.493262 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") pod \"e93c6a82-9651-4ed2-a941-9414d9aff62c\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.493334 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") pod \"e93c6a82-9651-4ed2-a941-9414d9aff62c\" (UID: \"e93c6a82-9651-4ed2-a941-9414d9aff62c\") " Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.493959 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume" (OuterVolumeSpecName: "config-volume") pod "e93c6a82-9651-4ed2-a941-9414d9aff62c" (UID: "e93c6a82-9651-4ed2-a941-9414d9aff62c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.497815 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e93c6a82-9651-4ed2-a941-9414d9aff62c" (UID: "e93c6a82-9651-4ed2-a941-9414d9aff62c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.498794 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt" (OuterVolumeSpecName: "kube-api-access-c55xt") pod "e93c6a82-9651-4ed2-a941-9414d9aff62c" (UID: "e93c6a82-9651-4ed2-a941-9414d9aff62c"). InnerVolumeSpecName "kube-api-access-c55xt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.595009 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e93c6a82-9651-4ed2-a941-9414d9aff62c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.595060 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c55xt\" (UniqueName: \"kubernetes.io/projected/e93c6a82-9651-4ed2-a941-9414d9aff62c-kube-api-access-c55xt\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.595069 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e93c6a82-9651-4ed2-a941-9414d9aff62c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.997593 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" event={"ID":"e93c6a82-9651-4ed2-a941-9414d9aff62c","Type":"ContainerDied","Data":"e6b6e42c855295ba91f6834b95903c938c31c49afcc92b34579974a80c3b5cbc"} Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.998799 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6b6e42c855295ba91f6834b95903c938c31c49afcc92b34579974a80c3b5cbc" Jan 21 15:30:03 crc kubenswrapper[4902]: I0121 15:30:03.997651 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg" Jan 21 15:30:04 crc kubenswrapper[4902]: I0121 15:30:04.404186 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"] Jan 21 15:30:04 crc kubenswrapper[4902]: I0121 15:30:04.413398 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-2whx2"] Jan 21 15:30:06 crc kubenswrapper[4902]: I0121 15:30:06.303373 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fbc78bb-1faf-4da9-ab79-cee1540bb647" path="/var/lib/kubelet/pods/0fbc78bb-1faf-4da9-ab79-cee1540bb647/volumes" Jan 21 15:30:08 crc kubenswrapper[4902]: I0121 15:30:08.303212 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:08 crc kubenswrapper[4902]: E0121 15:30:08.304118 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:19 crc kubenswrapper[4902]: I0121 15:30:19.295516 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:19 crc kubenswrapper[4902]: E0121 15:30:19.296213 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.670609 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:20 crc kubenswrapper[4902]: E0121 15:30:20.671260 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93c6a82-9651-4ed2-a941-9414d9aff62c" containerName="collect-profiles" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.671277 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93c6a82-9651-4ed2-a941-9414d9aff62c" containerName="collect-profiles" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.671443 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93c6a82-9651-4ed2-a941-9414d9aff62c" containerName="collect-profiles" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.672802 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.682738 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.847158 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.847245 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.847285 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.948777 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.948863 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.949004 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") pod 
\"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.949534 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.949718 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.978359 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") pod \"redhat-marketplace-kkdg8\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:20 crc kubenswrapper[4902]: I0121 15:30:20.994292 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:21 crc kubenswrapper[4902]: I0121 15:30:21.428627 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:22 crc kubenswrapper[4902]: I0121 15:30:22.090823 4902 scope.go:117] "RemoveContainer" containerID="fa1156cf23ef6713ff3d92ca234f6e5140ae3f940464e50453ee6dd138fecf3b" Jan 21 15:30:22 crc kubenswrapper[4902]: I0121 15:30:22.122247 4902 generic.go:334] "Generic (PLEG): container finished" podID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerID="2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805" exitCode=0 Jan 21 15:30:22 crc kubenswrapper[4902]: I0121 15:30:22.122301 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerDied","Data":"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805"} Jan 21 15:30:22 crc kubenswrapper[4902]: I0121 15:30:22.122337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerStarted","Data":"63135e61a2dc6c716def5056ebb4d08cd182f00371ec69399c421fbb8857c147"} Jan 21 15:30:24 crc kubenswrapper[4902]: I0121 15:30:24.143401 4902 generic.go:334] "Generic (PLEG): container finished" podID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerID="3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1" exitCode=0 Jan 21 15:30:24 crc kubenswrapper[4902]: I0121 15:30:24.143821 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerDied","Data":"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1"} Jan 21 15:30:25 crc kubenswrapper[4902]: I0121 15:30:25.157672 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" 
event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerStarted","Data":"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f"} Jan 21 15:30:25 crc kubenswrapper[4902]: I0121 15:30:25.182335 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kkdg8" podStartSLOduration=2.648859559 podStartE2EDuration="5.182311517s" podCreationTimestamp="2026-01-21 15:30:20 +0000 UTC" firstStartedPulling="2026-01-21 15:30:22.125089294 +0000 UTC m=+3384.201922373" lastFinishedPulling="2026-01-21 15:30:24.658541302 +0000 UTC m=+3386.735374331" observedRunningTime="2026-01-21 15:30:25.179956711 +0000 UTC m=+3387.256789780" watchObservedRunningTime="2026-01-21 15:30:25.182311517 +0000 UTC m=+3387.259144586" Jan 21 15:30:30 crc kubenswrapper[4902]: I0121 15:30:30.995247 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:30 crc kubenswrapper[4902]: I0121 15:30:30.995620 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:31 crc kubenswrapper[4902]: I0121 15:30:31.053078 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:31 crc kubenswrapper[4902]: I0121 15:30:31.247170 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:31 crc kubenswrapper[4902]: I0121 15:30:31.295684 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:31 crc kubenswrapper[4902]: E0121 15:30:31.296105 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:31 crc kubenswrapper[4902]: I0121 15:30:31.302314 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:33 crc kubenswrapper[4902]: I0121 15:30:33.222349 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kkdg8" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="registry-server" containerID="cri-o://e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" gracePeriod=2 Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.231557 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.233760 4902 generic.go:334] "Generic (PLEG): container finished" podID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerID="e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" exitCode=0 Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.233816 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerDied","Data":"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f"} Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.233851 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkdg8" event={"ID":"39a9fe1b-7335-4734-976f-9fdb787938c0","Type":"ContainerDied","Data":"63135e61a2dc6c716def5056ebb4d08cd182f00371ec69399c421fbb8857c147"} Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.233872 4902 scope.go:117] "RemoveContainer" containerID="e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.259010 4902 scope.go:117] "RemoveContainer" containerID="3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.269753 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") pod \"39a9fe1b-7335-4734-976f-9fdb787938c0\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.270013 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") pod \"39a9fe1b-7335-4734-976f-9fdb787938c0\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.270209 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") pod \"39a9fe1b-7335-4734-976f-9fdb787938c0\" (UID: \"39a9fe1b-7335-4734-976f-9fdb787938c0\") " Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.271029 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities" (OuterVolumeSpecName: "utilities") pod "39a9fe1b-7335-4734-976f-9fdb787938c0" (UID: "39a9fe1b-7335-4734-976f-9fdb787938c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.276469 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7" (OuterVolumeSpecName: "kube-api-access-rvfq7") pod "39a9fe1b-7335-4734-976f-9fdb787938c0" (UID: "39a9fe1b-7335-4734-976f-9fdb787938c0"). InnerVolumeSpecName "kube-api-access-rvfq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.294426 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39a9fe1b-7335-4734-976f-9fdb787938c0" (UID: "39a9fe1b-7335-4734-976f-9fdb787938c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.297648 4902 scope.go:117] "RemoveContainer" containerID="2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.333099 4902 scope.go:117] "RemoveContainer" containerID="e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" Jan 21 15:30:34 crc kubenswrapper[4902]: E0121 15:30:34.334247 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f\": container with ID starting with e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f not found: ID does not exist" containerID="e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.334302 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f"} err="failed to get container status \"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f\": rpc error: code = NotFound desc = could not find container \"e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f\": container with ID starting with e366d7213bb6c50909175ffd7494f76761e2a76a4f6aec395cd3017195a7540f not found: ID does not exist" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.334334 4902 scope.go:117] "RemoveContainer" containerID="3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1" Jan 21 15:30:34 crc kubenswrapper[4902]: E0121 15:30:34.335127 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1\": container with ID starting with 3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1 not found: ID does not exist" containerID="3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.335205 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1"} err="failed to get container status \"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1\": rpc error: code = NotFound desc = could not find container \"3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1\": container with ID starting with 3a27944c82a731bdb5767321407fad25136bfc04cc67971318677dd787f80ce1 not found: ID does not exist" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.335258 4902 scope.go:117] "RemoveContainer" containerID="2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805" Jan 21 15:30:34 crc kubenswrapper[4902]: E0121 15:30:34.335725 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805\": container with ID starting with 2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805 not found: ID does not exist" containerID="2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.335771 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805"} err="failed to get container status \"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805\": rpc error: code = NotFound desc = could not find container \"2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805\": container with ID starting with 2480d82f65de82746c1eb70d59f1b79cec2023b767cfd5a115d53dcbda4e6805 not found: ID does not exist" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.372003 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.372095 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39a9fe1b-7335-4734-976f-9fdb787938c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:34 crc kubenswrapper[4902]: I0121 15:30:34.372111 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvfq7\" (UniqueName: \"kubernetes.io/projected/39a9fe1b-7335-4734-976f-9fdb787938c0-kube-api-access-rvfq7\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:35 crc kubenswrapper[4902]: I0121 15:30:35.247903 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkdg8" Jan 21 15:30:35 crc kubenswrapper[4902]: I0121 15:30:35.269025 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:35 crc kubenswrapper[4902]: I0121 15:30:35.275516 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkdg8"] Jan 21 15:30:36 crc kubenswrapper[4902]: I0121 15:30:36.313358 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" path="/var/lib/kubelet/pods/39a9fe1b-7335-4734-976f-9fdb787938c0/volumes" Jan 21 15:30:42 crc kubenswrapper[4902]: I0121 15:30:42.295452 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:42 crc kubenswrapper[4902]: E0121 15:30:42.296306 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:30:55 crc kubenswrapper[4902]: I0121 15:30:55.295855 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:30:55 crc kubenswrapper[4902]: E0121 15:30:55.297130 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:31:09 crc kubenswrapper[4902]: I0121 15:31:09.295396 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:31:09 crc kubenswrapper[4902]: E0121 15:31:09.296678 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:31:22 crc kubenswrapper[4902]: I0121 15:31:22.295342 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:31:22 crc kubenswrapper[4902]: I0121 15:31:22.611995 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed"} Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.122595 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:31 crc kubenswrapper[4902]: E0121 15:31:31.123303 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" 
containerName="extract-content" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.123315 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="extract-content" Jan 21 15:31:31 crc kubenswrapper[4902]: E0121 15:31:31.123335 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="registry-server" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.123341 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="registry-server" Jan 21 15:31:31 crc kubenswrapper[4902]: E0121 15:31:31.123358 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="extract-utilities" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.123364 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="extract-utilities" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.123492 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a9fe1b-7335-4734-976f-9fdb787938c0" containerName="registry-server" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.124418 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.154936 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.249350 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.249614 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.249738 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.350638 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.350710 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") pod \"community-operators-7768z\" (UID: 
\"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.350760 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.351331 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.351476 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.374484 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") pod \"community-operators-7768z\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.462749 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:31 crc kubenswrapper[4902]: I0121 15:31:31.787749 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:32 crc kubenswrapper[4902]: I0121 15:31:32.688755 4902 generic.go:334] "Generic (PLEG): container finished" podID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerID="3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1" exitCode=0 Jan 21 15:31:32 crc kubenswrapper[4902]: I0121 15:31:32.688800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerDied","Data":"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1"} Jan 21 15:31:32 crc kubenswrapper[4902]: I0121 15:31:32.689026 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerStarted","Data":"a4fde28708bde8ddfb8d4b7f02f0bb7bca9b9fbbc2803ec40f958a5fa6144701"} Jan 21 15:31:33 crc kubenswrapper[4902]: I0121 15:31:33.708525 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerStarted","Data":"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb"} Jan 21 15:31:34 crc kubenswrapper[4902]: I0121 15:31:34.719293 4902 generic.go:334] "Generic (PLEG): container finished" podID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerID="43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb" exitCode=0 Jan 21 15:31:34 crc kubenswrapper[4902]: I0121 15:31:34.719367 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerDied","Data":"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb"} Jan 21 15:31:34 crc kubenswrapper[4902]: I0121 15:31:34.722008 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:31:35 crc kubenswrapper[4902]: I0121 15:31:35.728380 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerStarted","Data":"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711"} Jan 21 15:31:35 crc kubenswrapper[4902]: I0121 15:31:35.759899 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7768z" podStartSLOduration=2.320205848 podStartE2EDuration="4.759881067s" podCreationTimestamp="2026-01-21 15:31:31 +0000 UTC" firstStartedPulling="2026-01-21 15:31:32.690721477 +0000 UTC m=+3454.767554506" lastFinishedPulling="2026-01-21 15:31:35.130396696 +0000 UTC m=+3457.207229725" observedRunningTime="2026-01-21 15:31:35.750347602 +0000 UTC m=+3457.827180641" watchObservedRunningTime="2026-01-21 15:31:35.759881067 +0000 UTC m=+3457.836714106" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.463325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.465215 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.519286 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.840035 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:41 crc kubenswrapper[4902]: I0121 15:31:41.886829 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:43 crc kubenswrapper[4902]: I0121 15:31:43.798838 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7768z" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="registry-server" containerID="cri-o://72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" gracePeriod=2 Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.246666 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.356020 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") pod \"43b01214-a7cb-4f07-a4a2-9ca629e85474\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.356211 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") pod \"43b01214-a7cb-4f07-a4a2-9ca629e85474\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.356266 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") pod \"43b01214-a7cb-4f07-a4a2-9ca629e85474\" (UID: \"43b01214-a7cb-4f07-a4a2-9ca629e85474\") " Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.357199 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities" (OuterVolumeSpecName: "utilities") pod "43b01214-a7cb-4f07-a4a2-9ca629e85474" (UID: "43b01214-a7cb-4f07-a4a2-9ca629e85474"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.362612 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8" (OuterVolumeSpecName: "kube-api-access-7vqv8") pod "43b01214-a7cb-4f07-a4a2-9ca629e85474" (UID: "43b01214-a7cb-4f07-a4a2-9ca629e85474"). InnerVolumeSpecName "kube-api-access-7vqv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.435751 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43b01214-a7cb-4f07-a4a2-9ca629e85474" (UID: "43b01214-a7cb-4f07-a4a2-9ca629e85474"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.457842 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vqv8\" (UniqueName: \"kubernetes.io/projected/43b01214-a7cb-4f07-a4a2-9ca629e85474-kube-api-access-7vqv8\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.457894 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.457910 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43b01214-a7cb-4f07-a4a2-9ca629e85474-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.808150 4902 generic.go:334] "Generic (PLEG): container finished" podID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerID="72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" exitCode=0 Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.808219 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7768z" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.808246 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerDied","Data":"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711"} Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.809253 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7768z" event={"ID":"43b01214-a7cb-4f07-a4a2-9ca629e85474","Type":"ContainerDied","Data":"a4fde28708bde8ddfb8d4b7f02f0bb7bca9b9fbbc2803ec40f958a5fa6144701"} Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.809291 4902 scope.go:117] "RemoveContainer" containerID="72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.837134 4902 scope.go:117] "RemoveContainer" containerID="43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.843660 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.850212 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7768z"] Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.872602 4902 scope.go:117] "RemoveContainer" containerID="3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.889959 4902 scope.go:117] "RemoveContainer" containerID="72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" Jan 21 15:31:44 crc kubenswrapper[4902]: E0121 15:31:44.890464 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711\": container with ID starting with 72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711 not found: ID does not exist" containerID="72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.890731 
4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711"} err="failed to get container status \"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711\": rpc error: code = NotFound desc = could not find container \"72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711\": container with ID starting with 72053c18336b002e109364bd0bbb58f79cfc1de978bfb7a1130af76b5175d711 not found: ID does not exist" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.891829 4902 scope.go:117] "RemoveContainer" containerID="43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb" Jan 21 15:31:44 crc kubenswrapper[4902]: E0121 15:31:44.893447 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb\": container with ID starting with 43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb not found: ID does not exist" containerID="43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.893481 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb"} err="failed to get container status \"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb\": rpc error: code = NotFound desc = could not find container \"43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb\": container with ID starting with 43fb7b60c9bfe69a5145058697e9ec46a3c77589d6e33a3d137eebf90e452afb not found: ID does not exist" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.893505 4902 scope.go:117] "RemoveContainer" containerID="3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1" Jan 21 15:31:44 crc kubenswrapper[4902]: E0121 15:31:44.895122 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1\": container with ID starting with 3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1 not found: ID does not exist" containerID="3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1" Jan 21 15:31:44 crc kubenswrapper[4902]: I0121 15:31:44.895228 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1"} err="failed to get container status \"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1\": rpc error: code = NotFound desc = could not find container \"3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1\": container with ID starting with 3f2435aa7f0686465fbe40e8d5408e82ef993d8b03bf0cb6bcbabea4092373d1 not found: ID does not exist" Jan 21 15:31:46 crc kubenswrapper[4902]: I0121 15:31:46.308445 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" path="/var/lib/kubelet/pods/43b01214-a7cb-4f07-a4a2-9ca629e85474/volumes" Jan 21 15:33:47 crc kubenswrapper[4902]: I0121 15:33:47.770565 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:33:47 crc kubenswrapper[4902]: I0121 15:33:47.771394 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:34:17 crc kubenswrapper[4902]: I0121 15:34:17.769446 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:34:17 crc kubenswrapper[4902]: I0121 15:34:17.770008 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.770606 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.772689 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.772841 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.773691 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:34:47 crc kubenswrapper[4902]: I0121 15:34:47.775159 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed" gracePeriod=600 Jan 21 15:34:48 crc kubenswrapper[4902]: I0121 15:34:48.289573 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed" exitCode=0 Jan 21 15:34:48 crc kubenswrapper[4902]: I0121 15:34:48.289636 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed"} Jan 21 15:34:48 crc kubenswrapper[4902]: I0121 15:34:48.289907 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"} Jan 21 15:34:48 crc kubenswrapper[4902]: I0121 15:34:48.289930 4902 scope.go:117] "RemoveContainer" containerID="e7fed4c86edb96bd9bf0ea4c53c6cfd338644d0c413115d7d6275ecb6aa0985e" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.888960 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:27 crc kubenswrapper[4902]: E0121 15:35:27.889821 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="registry-server" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.889839 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="registry-server" Jan 21 15:35:27 crc kubenswrapper[4902]: E0121 15:35:27.889859 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="extract-content" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.889867 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="extract-content" Jan 21 15:35:27 crc kubenswrapper[4902]: E0121 15:35:27.889884 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="extract-utilities" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.889892 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="extract-utilities" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.890094 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b01214-a7cb-4f07-a4a2-9ca629e85474" containerName="registry-server" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.891285 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:27 crc kubenswrapper[4902]: I0121 15:35:27.901831 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.003227 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.003293 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.003354 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104141 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104476 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104627 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104752 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.104984 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.125166 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") pod \"redhat-operators-rrdhp\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.207438 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:28 crc kubenswrapper[4902]: I0121 15:35:28.673077 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:29 crc kubenswrapper[4902]: I0121 15:35:29.641244 4902 generic.go:334] "Generic (PLEG): container finished" podID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerID="828f9daf1127ffe2366fc257cd7eb968158b025f64de8f1eeda00f7bff957c80" exitCode=0 Jan 21 15:35:29 crc kubenswrapper[4902]: I0121 15:35:29.641310 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerDied","Data":"828f9daf1127ffe2366fc257cd7eb968158b025f64de8f1eeda00f7bff957c80"} Jan 21 15:35:29 crc kubenswrapper[4902]: I0121 15:35:29.641647 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerStarted","Data":"ce44022cdb20f101ec872cfce74fcd3ac6eaa437f7d48c42d63229c91b61642c"} Jan 21 15:35:31 crc kubenswrapper[4902]: I0121 15:35:31.665036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerStarted","Data":"686b240c6ffdfc497e30c4c41658f30c7060e2e55f0a14e5732c7537e9030851"} Jan 21 15:35:32 crc kubenswrapper[4902]: I0121 15:35:32.674851 4902 generic.go:334] "Generic (PLEG): container finished" podID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerID="686b240c6ffdfc497e30c4c41658f30c7060e2e55f0a14e5732c7537e9030851" exitCode=0 Jan 21 15:35:32 crc kubenswrapper[4902]: I0121 15:35:32.674910 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerDied","Data":"686b240c6ffdfc497e30c4c41658f30c7060e2e55f0a14e5732c7537e9030851"} Jan 21 15:35:33 crc kubenswrapper[4902]: I0121 15:35:33.686467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerStarted","Data":"24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e"} Jan 21 15:35:38 crc kubenswrapper[4902]: I0121 15:35:38.208169 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:38 crc kubenswrapper[4902]: I0121 15:35:38.208635 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:39 crc kubenswrapper[4902]: I0121 15:35:39.269196 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rrdhp" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" probeResult="failure" output=< Jan 21 15:35:39 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 15:35:39 crc kubenswrapper[4902]: > Jan 21 15:35:48 crc 
kubenswrapper[4902]: I0121 15:35:48.285227 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:48 crc kubenswrapper[4902]: I0121 15:35:48.324670 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rrdhp" podStartSLOduration=17.65265842 podStartE2EDuration="21.324644721s" podCreationTimestamp="2026-01-21 15:35:27 +0000 UTC" firstStartedPulling="2026-01-21 15:35:29.643145484 +0000 UTC m=+3691.719978523" lastFinishedPulling="2026-01-21 15:35:33.315131795 +0000 UTC m=+3695.391964824" observedRunningTime="2026-01-21 15:35:33.712524587 +0000 UTC m=+3695.789357666" watchObservedRunningTime="2026-01-21 15:35:48.324644721 +0000 UTC m=+3710.401477780" Jan 21 15:35:48 crc kubenswrapper[4902]: I0121 15:35:48.370849 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:48 crc kubenswrapper[4902]: I0121 15:35:48.536714 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:49 crc kubenswrapper[4902]: I0121 15:35:49.826334 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rrdhp" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" containerID="cri-o://24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e" gracePeriod=2 Jan 21 15:35:50 crc kubenswrapper[4902]: I0121 15:35:50.840852 4902 generic.go:334] "Generic (PLEG): container finished" podID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerID="24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e" exitCode=0 Jan 21 15:35:50 crc kubenswrapper[4902]: I0121 15:35:50.840955 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerDied","Data":"24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e"} Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.343603 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.377435 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") pod \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.377574 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") pod \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.377605 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") pod \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\" (UID: \"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f\") " Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.379371 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities" (OuterVolumeSpecName: "utilities") pod "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" (UID: "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.385854 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh" (OuterVolumeSpecName: "kube-api-access-42wkh") pod "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" (UID: "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f"). InnerVolumeSpecName "kube-api-access-42wkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.478521 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42wkh\" (UniqueName: \"kubernetes.io/projected/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-kube-api-access-42wkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.478556 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.534457 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" (UID: "a48d7c61-552f-4d48-9b2e-cd7d1099fb3f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.579572 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.861893 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rrdhp" event={"ID":"a48d7c61-552f-4d48-9b2e-cd7d1099fb3f","Type":"ContainerDied","Data":"ce44022cdb20f101ec872cfce74fcd3ac6eaa437f7d48c42d63229c91b61642c"} Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.861987 4902 scope.go:117] "RemoveContainer" containerID="24b813e80bc07f3368f385fb08ce6b8441d40a49c4b80989502abfe186c7178e" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.863337 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rrdhp" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.906208 4902 scope.go:117] "RemoveContainer" containerID="686b240c6ffdfc497e30c4c41658f30c7060e2e55f0a14e5732c7537e9030851" Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.927424 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.938108 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rrdhp"] Jan 21 15:35:51 crc kubenswrapper[4902]: I0121 15:35:51.952707 4902 scope.go:117] "RemoveContainer" containerID="828f9daf1127ffe2366fc257cd7eb968158b025f64de8f1eeda00f7bff957c80" Jan 21 15:35:52 crc kubenswrapper[4902]: I0121 15:35:52.313870 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" path="/var/lib/kubelet/pods/a48d7c61-552f-4d48-9b2e-cd7d1099fb3f/volumes" Jan 21 15:37:17 crc kubenswrapper[4902]: I0121 15:37:17.770374 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:37:17 crc kubenswrapper[4902]: I0121 15:37:17.771022 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:37:47 crc kubenswrapper[4902]: I0121 15:37:47.770217 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:37:47 crc kubenswrapper[4902]: I0121 15:37:47.772227 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.769394 4902 patch_prober.go:28] 
interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.769992 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.770059 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.770761 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:38:17 crc kubenswrapper[4902]: I0121 15:38:17.770824 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" gracePeriod=600 Jan 21 15:38:17 crc kubenswrapper[4902]: E0121 15:38:17.914283 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:38:18 crc kubenswrapper[4902]: I0121 15:38:18.085143 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" exitCode=0 Jan 21 15:38:18 crc kubenswrapper[4902]: I0121 15:38:18.085196 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321"} Jan 21 15:38:18 crc kubenswrapper[4902]: I0121 15:38:18.085240 4902 scope.go:117] "RemoveContainer" containerID="d2b268969053a6288fc1ae2239677e73b8d0905d0f4f4bd5a3225c287ca914ed" Jan 21 15:38:18 crc kubenswrapper[4902]: I0121 15:38:18.085847 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:38:18 crc kubenswrapper[4902]: E0121 15:38:18.086281 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:38:32 crc kubenswrapper[4902]: I0121 15:38:32.294655 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:38:32 crc kubenswrapper[4902]: E0121 15:38:32.295468 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:38:47 crc kubenswrapper[4902]: I0121 15:38:47.297371 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:38:47 crc kubenswrapper[4902]: E0121 15:38:47.298508 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:02 crc kubenswrapper[4902]: I0121 15:39:02.295199 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:02 crc kubenswrapper[4902]: E0121 15:39:02.295901 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:16 crc kubenswrapper[4902]: I0121 15:39:16.294903 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:16 crc kubenswrapper[4902]: E0121 15:39:16.295773 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:28 crc kubenswrapper[4902]: I0121 15:39:28.301158 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:28 crc kubenswrapper[4902]: E0121 15:39:28.301815 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:40 crc kubenswrapper[4902]: I0121 15:39:40.295650 4902 
scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:40 crc kubenswrapper[4902]: E0121 15:39:40.296907 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:39:54 crc kubenswrapper[4902]: I0121 15:39:54.294923 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:39:54 crc kubenswrapper[4902]: E0121 15:39:54.296012 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:40:07 crc kubenswrapper[4902]: I0121 15:40:07.295253 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:40:07 crc kubenswrapper[4902]: E0121 15:40:07.298285 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.811935 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vchwj"] Jan 21 15:40:11 crc kubenswrapper[4902]: E0121 15:40:11.813559 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="extract-utilities" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.813640 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="extract-utilities" Jan 21 15:40:11 crc kubenswrapper[4902]: E0121 15:40:11.813692 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="extract-content" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.813706 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="extract-content" Jan 21 15:40:11 crc kubenswrapper[4902]: E0121 15:40:11.813731 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.813744 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.814101 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a48d7c61-552f-4d48-9b2e-cd7d1099fb3f" containerName="registry-server" Jan 21 15:40:11 crc 
kubenswrapper[4902]: I0121 15:40:11.816213 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.823713 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"] Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.945180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.945307 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:11 crc kubenswrapper[4902]: I0121 15:40:11.945346 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.046895 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.046966 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.046991 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.047514 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.047542 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj" Jan 
21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.080726 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") pod \"certified-operators-vchwj\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.164791 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:12 crc kubenswrapper[4902]: I0121 15:40:12.618996 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"] Jan 21 15:40:13 crc kubenswrapper[4902]: I0121 15:40:13.047296 4902 generic.go:334] "Generic (PLEG): container finished" podID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerID="ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf" exitCode=0 Jan 21 15:40:13 crc kubenswrapper[4902]: I0121 15:40:13.047345 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerDied","Data":"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf"} Jan 21 15:40:13 crc kubenswrapper[4902]: I0121 15:40:13.047607 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerStarted","Data":"a12d6fa48cd8508032045083a1ac196784061211f8ca33abf1e88badd2348a1a"} Jan 21 15:40:13 crc kubenswrapper[4902]: I0121 15:40:13.050503 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:40:14 crc kubenswrapper[4902]: I0121 15:40:14.058537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerStarted","Data":"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa"} Jan 21 15:40:15 crc kubenswrapper[4902]: I0121 15:40:15.071372 4902 generic.go:334] "Generic (PLEG): container finished" podID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerID="0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa" exitCode=0 Jan 21 15:40:15 crc kubenswrapper[4902]: I0121 15:40:15.071448 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerDied","Data":"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa"} Jan 21 15:40:16 crc kubenswrapper[4902]: I0121 15:40:16.082251 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerStarted","Data":"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb"} Jan 21 15:40:16 crc kubenswrapper[4902]: I0121 15:40:16.114240 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vchwj" podStartSLOduration=2.678047382 podStartE2EDuration="5.11422234s" podCreationTimestamp="2026-01-21 15:40:11 +0000 UTC" firstStartedPulling="2026-01-21 15:40:13.049998252 +0000 UTC m=+3975.126831281" lastFinishedPulling="2026-01-21 15:40:15.48617321 +0000 UTC m=+3977.563006239" observedRunningTime="2026-01-21 
15:40:16.110189977 +0000 UTC m=+3978.187023016" watchObservedRunningTime="2026-01-21 15:40:16.11422234 +0000 UTC m=+3978.191055379" Jan 21 15:40:22 crc kubenswrapper[4902]: I0121 15:40:22.177447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:22 crc kubenswrapper[4902]: I0121 15:40:22.178272 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:22 crc kubenswrapper[4902]: I0121 15:40:22.240338 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:22 crc kubenswrapper[4902]: I0121 15:40:22.295203 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:40:22 crc kubenswrapper[4902]: E0121 15:40:22.295523 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:40:23 crc kubenswrapper[4902]: I0121 15:40:23.211597 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:23 crc kubenswrapper[4902]: I0121 15:40:23.282390 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"] Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.162145 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vchwj" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="registry-server" containerID="cri-o://28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb" gracePeriod=2 Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.644433 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.806381 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") pod \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.806497 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") pod \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.806570 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") pod \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\" (UID: \"ec10f1dd-7bfa-4767-921e-d67dc1b461c7\") " Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.807691 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities" (OuterVolumeSpecName: "utilities") pod "ec10f1dd-7bfa-4767-921e-d67dc1b461c7" (UID: "ec10f1dd-7bfa-4767-921e-d67dc1b461c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.813689 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6" (OuterVolumeSpecName: "kube-api-access-5q9f6") pod "ec10f1dd-7bfa-4767-921e-d67dc1b461c7" (UID: "ec10f1dd-7bfa-4767-921e-d67dc1b461c7"). InnerVolumeSpecName "kube-api-access-5q9f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.851559 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec10f1dd-7bfa-4767-921e-d67dc1b461c7" (UID: "ec10f1dd-7bfa-4767-921e-d67dc1b461c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.908035 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.908094 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:40:25 crc kubenswrapper[4902]: I0121 15:40:25.908111 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q9f6\" (UniqueName: \"kubernetes.io/projected/ec10f1dd-7bfa-4767-921e-d67dc1b461c7-kube-api-access-5q9f6\") on node \"crc\" DevicePath \"\"" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.171874 4902 generic.go:334] "Generic (PLEG): container finished" podID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerID="28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb" exitCode=0 Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.171981 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vchwj" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.171979 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerDied","Data":"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb"} Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.173421 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vchwj" event={"ID":"ec10f1dd-7bfa-4767-921e-d67dc1b461c7","Type":"ContainerDied","Data":"a12d6fa48cd8508032045083a1ac196784061211f8ca33abf1e88badd2348a1a"} Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.173460 4902 scope.go:117] "RemoveContainer" containerID="28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.201489 4902 scope.go:117] "RemoveContainer" containerID="0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.224212 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"] Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.233759 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vchwj"] Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.235696 4902 scope.go:117] "RemoveContainer" containerID="ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.269241 4902 scope.go:117] "RemoveContainer" containerID="28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb" Jan 21 15:40:26 crc kubenswrapper[4902]: E0121 15:40:26.269992 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb\": container with ID starting with 28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb not found: ID does not exist" containerID="28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.270066 
4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb"} err="failed to get container status \"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb\": rpc error: code = NotFound desc = could not find container \"28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb\": container with ID starting with 28f4b173aafe49c2f248e23b475cc44b5426874b6a070f1808b7363cdd50a5bb not found: ID does not exist" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.270112 4902 scope.go:117] "RemoveContainer" containerID="0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa" Jan 21 15:40:26 crc kubenswrapper[4902]: E0121 15:40:26.270587 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa\": container with ID starting with 0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa not found: ID does not exist" containerID="0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.270639 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa"} err="failed to get container status \"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa\": rpc error: code = NotFound desc = could not find container \"0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa\": container with ID starting with 0775d9102892fef44c9259c87571f1df473896bb9b2a29499794140a31de1efa not found: ID does not exist" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.270673 4902 scope.go:117] "RemoveContainer" containerID="ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf" Jan 21 15:40:26 crc kubenswrapper[4902]: E0121 15:40:26.271011 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf\": container with ID starting with ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf not found: ID does not exist" containerID="ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.271065 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf"} err="failed to get container status \"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf\": rpc error: code = NotFound desc = could not find container \"ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf\": container with ID starting with ce25437daa0a10656e6fb4ab8dde255e9d713d68d2732ca1435d43e8b4409daf not found: ID does not exist" Jan 21 15:40:26 crc kubenswrapper[4902]: I0121 15:40:26.309776 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" path="/var/lib/kubelet/pods/ec10f1dd-7bfa-4767-921e-d67dc1b461c7/volumes" Jan 21 15:40:36 crc kubenswrapper[4902]: I0121 15:40:36.295576 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:40:36 crc kubenswrapper[4902]: E0121 15:40:36.298574 4902 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:40:49 crc kubenswrapper[4902]: I0121 15:40:49.295210 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:40:49 crc kubenswrapper[4902]: E0121 15:40:49.296224 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:02 crc kubenswrapper[4902]: I0121 15:41:02.295209 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:41:02 crc kubenswrapper[4902]: E0121 15:41:02.295914 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:13 crc kubenswrapper[4902]: I0121 15:41:13.294728 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:41:13 crc kubenswrapper[4902]: E0121 15:41:13.295787 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:24 crc kubenswrapper[4902]: I0121 15:41:24.295177 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:41:24 crc kubenswrapper[4902]: E0121 15:41:24.296202 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:39 crc kubenswrapper[4902]: I0121 15:41:39.295613 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:41:39 crc kubenswrapper[4902]: E0121 15:41:39.296669 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.280174 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"] Jan 21 15:41:46 crc kubenswrapper[4902]: E0121 15:41:46.281205 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="extract-content" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.281221 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="extract-content" Jan 21 15:41:46 crc kubenswrapper[4902]: E0121 15:41:46.281251 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="extract-utilities" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.281261 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="extract-utilities" Jan 21 15:41:46 crc kubenswrapper[4902]: E0121 15:41:46.281276 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="registry-server" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.281284 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="registry-server" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.281459 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec10f1dd-7bfa-4767-921e-d67dc1b461c7" containerName="registry-server" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.283174 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.317036 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"] Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.471395 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.471470 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.471605 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.572781 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.572876 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.572938 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.573582 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.573687 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.600655 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") pod \"redhat-marketplace-wvsn8\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:46 crc kubenswrapper[4902]: I0121 15:41:46.610392 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:47 crc kubenswrapper[4902]: I0121 15:41:47.051716 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"] Jan 21 15:41:47 crc kubenswrapper[4902]: I0121 15:41:47.924258 4902 generic.go:334] "Generic (PLEG): container finished" podID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerID="b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297" exitCode=0 Jan 21 15:41:47 crc kubenswrapper[4902]: I0121 15:41:47.924372 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerDied","Data":"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297"} Jan 21 15:41:47 crc kubenswrapper[4902]: I0121 15:41:47.924536 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerStarted","Data":"08bfe779f035ed88017c1165646e93b26c0f9c40cf978a2efa9fac64b28aafd0"} Jan 21 15:41:48 crc kubenswrapper[4902]: I0121 15:41:48.936931 4902 generic.go:334] "Generic (PLEG): container finished" podID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerID="85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5" exitCode=0 Jan 21 15:41:48 crc kubenswrapper[4902]: I0121 15:41:48.937086 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerDied","Data":"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5"} Jan 21 15:41:49 crc kubenswrapper[4902]: I0121 15:41:49.949306 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerStarted","Data":"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105"} Jan 21 15:41:49 crc kubenswrapper[4902]: I0121 15:41:49.993302 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wvsn8" podStartSLOduration=2.580537099 podStartE2EDuration="3.993219744s" podCreationTimestamp="2026-01-21 15:41:46 +0000 UTC" firstStartedPulling="2026-01-21 15:41:47.926942248 +0000 UTC m=+4070.003775297" lastFinishedPulling="2026-01-21 15:41:49.339624883 +0000 UTC m=+4071.416457942" observedRunningTime="2026-01-21 15:41:49.97587694 +0000 UTC m=+4072.052710009" watchObservedRunningTime="2026-01-21 15:41:49.993219744 +0000 UTC m=+4072.070052823" Jan 21 15:41:52 crc kubenswrapper[4902]: I0121 15:41:52.295412 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:41:52 crc kubenswrapper[4902]: E0121 15:41:52.296106 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:41:56 crc kubenswrapper[4902]: I0121 15:41:56.611505 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:56 crc kubenswrapper[4902]: I0121 15:41:56.611998 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:56 crc kubenswrapper[4902]: I0121 15:41:56.686217 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:57 crc kubenswrapper[4902]: I0121 15:41:57.081221 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:41:57 crc kubenswrapper[4902]: I0121 15:41:57.142129 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"] Jan 21 15:41:59 crc kubenswrapper[4902]: I0121 15:41:59.019024 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wvsn8" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="registry-server" containerID="cri-o://86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" gracePeriod=2 Jan 21 15:41:59 crc kubenswrapper[4902]: I0121 15:41:59.938523 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.020312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") pod \"69b1aca2-2d07-48af-8875-7f4600c6761c\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.020388 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") pod \"69b1aca2-2d07-48af-8875-7f4600c6761c\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.020431 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") pod \"69b1aca2-2d07-48af-8875-7f4600c6761c\" (UID: \"69b1aca2-2d07-48af-8875-7f4600c6761c\") " Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.021326 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities" (OuterVolumeSpecName: "utilities") pod "69b1aca2-2d07-48af-8875-7f4600c6761c" (UID: "69b1aca2-2d07-48af-8875-7f4600c6761c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026833 4902 generic.go:334] "Generic (PLEG): container finished" podID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerID="86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" exitCode=0 Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026871 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerDied","Data":"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105"} Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026904 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wvsn8" event={"ID":"69b1aca2-2d07-48af-8875-7f4600c6761c","Type":"ContainerDied","Data":"08bfe779f035ed88017c1165646e93b26c0f9c40cf978a2efa9fac64b28aafd0"} Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026921 4902 scope.go:117] "RemoveContainer" containerID="86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.026952 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wvsn8" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.029600 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4" (OuterVolumeSpecName: "kube-api-access-l59p4") pod "69b1aca2-2d07-48af-8875-7f4600c6761c" (UID: "69b1aca2-2d07-48af-8875-7f4600c6761c"). InnerVolumeSpecName "kube-api-access-l59p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.044832 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69b1aca2-2d07-48af-8875-7f4600c6761c" (UID: "69b1aca2-2d07-48af-8875-7f4600c6761c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.060262 4902 scope.go:117] "RemoveContainer" containerID="85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.078602 4902 scope.go:117] "RemoveContainer" containerID="b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.101559 4902 scope.go:117] "RemoveContainer" containerID="86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" Jan 21 15:42:00 crc kubenswrapper[4902]: E0121 15:42:00.102170 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105\": container with ID starting with 86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105 not found: ID does not exist" containerID="86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.102247 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105"} err="failed to get container status \"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105\": rpc error: code = NotFound desc = could not find container \"86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105\": container with ID starting with 86f46c7cbddc80f072f4dacf37de010e346b7e55d193eea27c1618d181399105 not found: ID does not exist" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.102280 4902 scope.go:117] "RemoveContainer" containerID="85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5" Jan 21 15:42:00 crc kubenswrapper[4902]: E0121 15:42:00.102709 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5\": container with ID starting with 85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5 not found: ID does not exist" containerID="85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.102853 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5"} err="failed to get container status \"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5\": rpc error: code = NotFound desc = could not find container \"85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5\": container with ID starting with 85ff174f936d179ddfd6297bff76d9a9771838b858429659faed4e47e0545ef5 not found: ID does not exist" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.102889 4902 scope.go:117] "RemoveContainer" containerID="b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297" Jan 21 15:42:00 crc kubenswrapper[4902]: E0121 15:42:00.103189 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297\": container with ID starting with b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297 not found: ID does not exist" containerID="b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297" Jan 21 15:42:00 crc 
kubenswrapper[4902]: I0121 15:42:00.103215 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297"} err="failed to get container status \"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297\": rpc error: code = NotFound desc = could not find container \"b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297\": container with ID starting with b9f4399bbaf4897e03bc040bc09f00c05acd204b6bc6dc96db1b2de938557297 not found: ID does not exist" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.121405 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.121434 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l59p4\" (UniqueName: \"kubernetes.io/projected/69b1aca2-2d07-48af-8875-7f4600c6761c-kube-api-access-l59p4\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.121445 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1aca2-2d07-48af-8875-7f4600c6761c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.392507 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"] Jan 21 15:42:00 crc kubenswrapper[4902]: I0121 15:42:00.397569 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wvsn8"] Jan 21 15:42:02 crc kubenswrapper[4902]: I0121 15:42:02.308629 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" path="/var/lib/kubelet/pods/69b1aca2-2d07-48af-8875-7f4600c6761c/volumes" Jan 21 15:42:04 crc kubenswrapper[4902]: I0121 15:42:04.299490 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:42:04 crc kubenswrapper[4902]: E0121 15:42:04.300105 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:42:15 crc kubenswrapper[4902]: I0121 15:42:15.294343 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:42:15 crc kubenswrapper[4902]: E0121 15:42:15.295157 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:42:26 crc kubenswrapper[4902]: I0121 15:42:26.295254 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:42:26 crc kubenswrapper[4902]: E0121 
15:42:26.295865 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:42:39 crc kubenswrapper[4902]: I0121 15:42:39.295198 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:42:39 crc kubenswrapper[4902]: E0121 15:42:39.295985 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:42:50 crc kubenswrapper[4902]: I0121 15:42:50.295395 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:42:50 crc kubenswrapper[4902]: E0121 15:42:50.296267 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:43:01 crc kubenswrapper[4902]: I0121 15:43:01.295142 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:43:01 crc kubenswrapper[4902]: E0121 15:43:01.296196 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:43:14 crc kubenswrapper[4902]: I0121 15:43:14.294872 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:43:14 crc kubenswrapper[4902]: E0121 15:43:14.295717 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:43:27 crc kubenswrapper[4902]: I0121 15:43:27.294830 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:43:27 crc kubenswrapper[4902]: I0121 15:43:27.800712 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27"} Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.246704 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vl7zv"] Jan 21 15:44:16 crc kubenswrapper[4902]: E0121 15:44:16.249756 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="extract-content" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.249821 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="extract-content" Jan 21 15:44:16 crc kubenswrapper[4902]: E0121 15:44:16.249867 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="extract-utilities" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.249884 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="extract-utilities" Jan 21 15:44:16 crc kubenswrapper[4902]: E0121 15:44:16.249943 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="registry-server" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.249960 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="registry-server" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.250336 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b1aca2-2d07-48af-8875-7f4600c6761c" containerName="registry-server" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.252537 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.312646 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"] Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.428484 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.428538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.428623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.530610 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.530693 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.530725 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.531453 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.531749 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.555220 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") pod \"community-operators-vl7zv\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.607677 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:16 crc kubenswrapper[4902]: I0121 15:44:16.925964 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"] Jan 21 15:44:17 crc kubenswrapper[4902]: I0121 15:44:17.199589 4902 generic.go:334] "Generic (PLEG): container finished" podID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerID="51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07" exitCode=0 Jan 21 15:44:17 crc kubenswrapper[4902]: I0121 15:44:17.199637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerDied","Data":"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07"} Jan 21 15:44:17 crc kubenswrapper[4902]: I0121 15:44:17.199875 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerStarted","Data":"f60df5e2d75103334e1413a85fb112dc4d9fef95fbf03278ba675ec2bcfacaf4"} Jan 21 15:44:18 crc kubenswrapper[4902]: I0121 15:44:18.211837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerStarted","Data":"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999"} Jan 21 15:44:19 crc kubenswrapper[4902]: I0121 15:44:19.223320 4902 generic.go:334] "Generic (PLEG): container finished" podID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerID="f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999" exitCode=0 Jan 21 15:44:19 crc kubenswrapper[4902]: I0121 15:44:19.223435 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerDied","Data":"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999"} Jan 21 15:44:21 crc kubenswrapper[4902]: I0121 15:44:21.240227 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerStarted","Data":"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436"} Jan 21 15:44:26 crc kubenswrapper[4902]: I0121 15:44:26.608841 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:26 crc kubenswrapper[4902]: I0121 15:44:26.609573 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:26 crc kubenswrapper[4902]: I0121 15:44:26.689648 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:26 crc kubenswrapper[4902]: I0121 15:44:26.721601 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-vl7zv" podStartSLOduration=7.254519193 podStartE2EDuration="10.721582256s" podCreationTimestamp="2026-01-21 15:44:16 +0000 UTC" firstStartedPulling="2026-01-21 15:44:17.200958266 +0000 UTC m=+4219.277791295" lastFinishedPulling="2026-01-21 15:44:20.668021289 +0000 UTC m=+4222.744854358" observedRunningTime="2026-01-21 15:44:21.272603882 +0000 UTC m=+4223.349436911" watchObservedRunningTime="2026-01-21 15:44:26.721582256 +0000 UTC m=+4228.798415295" Jan 21 15:44:27 crc kubenswrapper[4902]: I0121 15:44:27.359385 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:27 crc kubenswrapper[4902]: I0121 15:44:27.410578 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"] Jan 21 15:44:29 crc kubenswrapper[4902]: I0121 15:44:29.305222 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vl7zv" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="registry-server" containerID="cri-o://a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" gracePeriod=2 Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.224563 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319300 4902 generic.go:334] "Generic (PLEG): container finished" podID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerID="a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" exitCode=0 Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319408 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vl7zv" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319428 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerDied","Data":"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436"} Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319797 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vl7zv" event={"ID":"ca9083b7-b28b-4908-8185-7284e29e74d9","Type":"ContainerDied","Data":"f60df5e2d75103334e1413a85fb112dc4d9fef95fbf03278ba675ec2bcfacaf4"} Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.319818 4902 scope.go:117] "RemoveContainer" containerID="a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.341361 4902 scope.go:117] "RemoveContainer" containerID="f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.353302 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") pod \"ca9083b7-b28b-4908-8185-7284e29e74d9\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.353391 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") pod \"ca9083b7-b28b-4908-8185-7284e29e74d9\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.353529 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") pod \"ca9083b7-b28b-4908-8185-7284e29e74d9\" (UID: \"ca9083b7-b28b-4908-8185-7284e29e74d9\") " Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.354394 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities" (OuterVolumeSpecName: "utilities") pod "ca9083b7-b28b-4908-8185-7284e29e74d9" (UID: "ca9083b7-b28b-4908-8185-7284e29e74d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.359005 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz" (OuterVolumeSpecName: "kube-api-access-7lfsz") pod "ca9083b7-b28b-4908-8185-7284e29e74d9" (UID: "ca9083b7-b28b-4908-8185-7284e29e74d9"). InnerVolumeSpecName "kube-api-access-7lfsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.367509 4902 scope.go:117] "RemoveContainer" containerID="51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.400617 4902 scope.go:117] "RemoveContainer" containerID="a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" Jan 21 15:44:30 crc kubenswrapper[4902]: E0121 15:44:30.401163 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436\": container with ID starting with a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436 not found: ID does not exist" containerID="a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.401205 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436"} err="failed to get container status \"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436\": rpc error: code = NotFound desc = could not find container \"a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436\": container with ID starting with a5c368a0a3138b6e1991c8056756fedae36251e44d9099de0c7a0ae79c210436 not found: ID does not exist" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.401231 4902 scope.go:117] "RemoveContainer" containerID="f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999" Jan 21 15:44:30 crc kubenswrapper[4902]: E0121 15:44:30.401865 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999\": container with ID starting with f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999 not found: ID does not exist" containerID="f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.401937 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999"} err="failed to get container status \"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999\": rpc error: code = NotFound desc = could not find container \"f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999\": container with ID starting with f6780a8c02a9189221781f9d3cf9782a5e83cfe7b2b2ee40c8ef78c897007999 not found: ID does not exist" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.401983 4902 scope.go:117] "RemoveContainer" containerID="51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07" Jan 21 15:44:30 crc kubenswrapper[4902]: E0121 15:44:30.402486 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07\": container with ID starting with 51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07 not found: ID does not exist" containerID="51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.402524 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07"} err="failed to get container status \"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07\": rpc error: code = NotFound desc = could not find container \"51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07\": container with ID starting with 51a6784d0a68c2ae3efcdd0218588e07793335aa2e7cb59e8e1d6ecab61f8e07 not found: ID does not exist" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.415732 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca9083b7-b28b-4908-8185-7284e29e74d9" (UID: "ca9083b7-b28b-4908-8185-7284e29e74d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.455501 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.455535 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lfsz\" (UniqueName: \"kubernetes.io/projected/ca9083b7-b28b-4908-8185-7284e29e74d9-kube-api-access-7lfsz\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.455548 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca9083b7-b28b-4908-8185-7284e29e74d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.656162 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"] Jan 21 15:44:30 crc kubenswrapper[4902]: I0121 15:44:30.663521 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vl7zv"] Jan 21 15:44:32 crc kubenswrapper[4902]: I0121 15:44:32.306269 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" path="/var/lib/kubelet/pods/ca9083b7-b28b-4908-8185-7284e29e74d9/volumes" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.183417 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"] Jan 21 15:45:00 crc kubenswrapper[4902]: E0121 15:45:00.184300 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="registry-server" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.184318 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="registry-server" Jan 21 15:45:00 crc kubenswrapper[4902]: E0121 15:45:00.184352 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="extract-content" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.184361 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="extract-content" Jan 21 15:45:00 crc kubenswrapper[4902]: E0121 15:45:00.184377 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="extract-utilities" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 
15:45:00.184386 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="extract-utilities" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.184544 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9083b7-b28b-4908-8185-7284e29e74d9" containerName="registry-server" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.185180 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.188523 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.188968 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.191014 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"] Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.322081 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.322147 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.322306 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrw8b\" (UniqueName: \"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.424073 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.424149 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.424230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrw8b\" (UniqueName: 
\"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.425698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.431283 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.445897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrw8b\" (UniqueName: \"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") pod \"collect-profiles-29483505-qjs6m\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.513322 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:00 crc kubenswrapper[4902]: I0121 15:45:00.936738 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"] Jan 21 15:45:01 crc kubenswrapper[4902]: I0121 15:45:01.594872 4902 generic.go:334] "Generic (PLEG): container finished" podID="6893ec42-9882-4d98-9d44-ab57d7366115" containerID="d06aac15e4e0103b43e5e004729564b5803ddb7e6af160a1d792ad3827466cc3" exitCode=0 Jan 21 15:45:01 crc kubenswrapper[4902]: I0121 15:45:01.596040 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" event={"ID":"6893ec42-9882-4d98-9d44-ab57d7366115","Type":"ContainerDied","Data":"d06aac15e4e0103b43e5e004729564b5803ddb7e6af160a1d792ad3827466cc3"} Jan 21 15:45:01 crc kubenswrapper[4902]: I0121 15:45:01.596206 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" event={"ID":"6893ec42-9882-4d98-9d44-ab57d7366115","Type":"ContainerStarted","Data":"b765903e04dab520f1ef47d032e2c1d9572c41170886af8820e8387445ee2867"} Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.833683 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.957869 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrw8b\" (UniqueName: \"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") pod \"6893ec42-9882-4d98-9d44-ab57d7366115\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.957965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") pod \"6893ec42-9882-4d98-9d44-ab57d7366115\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.958161 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") pod \"6893ec42-9882-4d98-9d44-ab57d7366115\" (UID: \"6893ec42-9882-4d98-9d44-ab57d7366115\") " Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.958944 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume" (OuterVolumeSpecName: "config-volume") pod "6893ec42-9882-4d98-9d44-ab57d7366115" (UID: "6893ec42-9882-4d98-9d44-ab57d7366115"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.965194 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6893ec42-9882-4d98-9d44-ab57d7366115" (UID: "6893ec42-9882-4d98-9d44-ab57d7366115"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:45:02 crc kubenswrapper[4902]: I0121 15:45:02.965422 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b" (OuterVolumeSpecName: "kube-api-access-mrw8b") pod "6893ec42-9882-4d98-9d44-ab57d7366115" (UID: "6893ec42-9882-4d98-9d44-ab57d7366115"). InnerVolumeSpecName "kube-api-access-mrw8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.059919 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrw8b\" (UniqueName: \"kubernetes.io/projected/6893ec42-9882-4d98-9d44-ab57d7366115-kube-api-access-mrw8b\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.059962 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6893ec42-9882-4d98-9d44-ab57d7366115-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.059974 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6893ec42-9882-4d98-9d44-ab57d7366115-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.613690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" event={"ID":"6893ec42-9882-4d98-9d44-ab57d7366115","Type":"ContainerDied","Data":"b765903e04dab520f1ef47d032e2c1d9572c41170886af8820e8387445ee2867"} Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.613744 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b765903e04dab520f1ef47d032e2c1d9572c41170886af8820e8387445ee2867" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.613830 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m" Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.911961 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:45:03 crc kubenswrapper[4902]: I0121 15:45:03.917201 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-qn2th"] Jan 21 15:45:04 crc kubenswrapper[4902]: I0121 15:45:04.308541 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ada0d02-9902-4746-b1ad-42b3f9e711a7" path="/var/lib/kubelet/pods/0ada0d02-9902-4746-b1ad-42b3f9e711a7/volumes" Jan 21 15:45:22 crc kubenswrapper[4902]: I0121 15:45:22.431761 4902 scope.go:117] "RemoveContainer" containerID="7ee1e059c9213e4cad45fc2396c6626d215288fb3b3b38f6079f8306a505e407" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.307710 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:41 crc kubenswrapper[4902]: E0121 15:45:41.308552 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6893ec42-9882-4d98-9d44-ab57d7366115" containerName="collect-profiles" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.308570 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6893ec42-9882-4d98-9d44-ab57d7366115" containerName="collect-profiles" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.308750 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6893ec42-9882-4d98-9d44-ab57d7366115" containerName="collect-profiles" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.309957 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.320086 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.485830 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.485880 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.485950 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587101 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587505 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587637 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.587969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.613016 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") pod \"redhat-operators-2zfl9\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:41 crc kubenswrapper[4902]: I0121 15:45:41.633064 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.113960 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.940081 4902 generic.go:334] "Generic (PLEG): container finished" podID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerID="636d22adb461bb6373e2fe80b61c78f4fbed5473aeb591006417a99bb62d7944" exitCode=0 Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.940332 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerDied","Data":"636d22adb461bb6373e2fe80b61c78f4fbed5473aeb591006417a99bb62d7944"} Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.940359 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerStarted","Data":"18a0fa1321e32542790fd0c6a88b5c886bd6611b61f239edf8b213986060be22"} Jan 21 15:45:42 crc kubenswrapper[4902]: I0121 15:45:42.941986 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:45:43 crc kubenswrapper[4902]: I0121 15:45:43.949017 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerStarted","Data":"dd11e45cc0c74ef372b06f28ed0e8d30f8550b3d0f4207853db07f59a58acb6c"} Jan 21 15:45:44 crc kubenswrapper[4902]: I0121 15:45:44.958557 4902 generic.go:334] "Generic (PLEG): container finished" podID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerID="dd11e45cc0c74ef372b06f28ed0e8d30f8550b3d0f4207853db07f59a58acb6c" exitCode=0 Jan 21 15:45:44 crc kubenswrapper[4902]: I0121 15:45:44.958619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerDied","Data":"dd11e45cc0c74ef372b06f28ed0e8d30f8550b3d0f4207853db07f59a58acb6c"} Jan 21 15:45:45 crc kubenswrapper[4902]: I0121 15:45:45.967411 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerStarted","Data":"80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363"} Jan 21 15:45:45 crc kubenswrapper[4902]: I0121 15:45:45.986346 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2zfl9" podStartSLOduration=2.576641285 podStartE2EDuration="4.986324474s" podCreationTimestamp="2026-01-21 15:45:41 +0000 UTC" firstStartedPulling="2026-01-21 15:45:42.94174838 +0000 UTC m=+4305.018581409" lastFinishedPulling="2026-01-21 15:45:45.351431569 +0000 UTC m=+4307.428264598" observedRunningTime="2026-01-21 15:45:45.985532491 +0000 UTC m=+4308.062365520" watchObservedRunningTime="2026-01-21 15:45:45.986324474 +0000 UTC m=+4308.063157513" Jan 21 15:45:47 crc 
kubenswrapper[4902]: I0121 15:45:47.769454 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:45:47 crc kubenswrapper[4902]: I0121 15:45:47.769791 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:45:51 crc kubenswrapper[4902]: I0121 15:45:51.633752 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:51 crc kubenswrapper[4902]: I0121 15:45:51.634083 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:51 crc kubenswrapper[4902]: I0121 15:45:51.678911 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:52 crc kubenswrapper[4902]: I0121 15:45:52.098842 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:52 crc kubenswrapper[4902]: I0121 15:45:52.166287 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:54 crc kubenswrapper[4902]: I0121 15:45:54.018767 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2zfl9" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="registry-server" containerID="cri-o://80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363" gracePeriod=2 Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.040714 4902 generic.go:334] "Generic (PLEG): container finished" podID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerID="80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363" exitCode=0 Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.040813 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerDied","Data":"80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363"} Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.041268 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfl9" event={"ID":"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3","Type":"ContainerDied","Data":"18a0fa1321e32542790fd0c6a88b5c886bd6611b61f239edf8b213986060be22"} Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.041285 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18a0fa1321e32542790fd0c6a88b5c886bd6611b61f239edf8b213986060be22" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.069424 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.210734 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") pod \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.210839 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") pod \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.210914 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") pod \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\" (UID: \"0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3\") " Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.211883 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities" (OuterVolumeSpecName: "utilities") pod "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" (UID: "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.221315 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t" (OuterVolumeSpecName: "kube-api-access-7pl8t") pod "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" (UID: "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3"). InnerVolumeSpecName "kube-api-access-7pl8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.313613 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.313789 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pl8t\" (UniqueName: \"kubernetes.io/projected/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-kube-api-access-7pl8t\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.328132 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" (UID: "0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:45:57 crc kubenswrapper[4902]: I0121 15:45:57.414993 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:58 crc kubenswrapper[4902]: I0121 15:45:58.046936 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfl9" Jan 21 15:45:58 crc kubenswrapper[4902]: I0121 15:45:58.076671 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:58 crc kubenswrapper[4902]: I0121 15:45:58.088623 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2zfl9"] Jan 21 15:45:58 crc kubenswrapper[4902]: I0121 15:45:58.307419 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" path="/var/lib/kubelet/pods/0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3/volumes" Jan 21 15:46:17 crc kubenswrapper[4902]: I0121 15:46:17.769663 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:46:17 crc kubenswrapper[4902]: I0121 15:46:17.770311 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.769495 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.770260 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.770324 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.771204 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:46:47 crc kubenswrapper[4902]: I0121 15:46:47.771294 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27" gracePeriod=600 Jan 21 15:46:48 crc kubenswrapper[4902]: I0121 15:46:48.430004 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27" exitCode=0 Jan 21 15:46:48 crc kubenswrapper[4902]: I0121 15:46:48.430333 4902 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27"} Jan 21 15:46:48 crc kubenswrapper[4902]: I0121 15:46:48.430366 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea"} Jan 21 15:46:48 crc kubenswrapper[4902]: I0121 15:46:48.430386 4902 scope.go:117] "RemoveContainer" containerID="78dda59ea56adfe96ca94f2f8d5e19344102d9ef8254e50fb64a5106f214a321" Jan 21 15:49:17 crc kubenswrapper[4902]: I0121 15:49:17.770289 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:49:17 crc kubenswrapper[4902]: I0121 15:49:17.770875 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:49:47 crc kubenswrapper[4902]: I0121 15:49:47.769378 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:49:47 crc kubenswrapper[4902]: I0121 15:49:47.770070 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.769347 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.769866 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.769929 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.770707 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.770784 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" gracePeriod=600 Jan 21 15:50:17 crc kubenswrapper[4902]: E0121 15:50:17.895333 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.989915 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" exitCode=0 Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.989967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea"} Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.990008 4902 scope.go:117] "RemoveContainer" containerID="96a0c468edaf7e0a12819e67dd2a8451666decc72c19f24c18c03a05bac8ba27" Jan 21 15:50:17 crc kubenswrapper[4902]: I0121 15:50:17.990527 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:50:17 crc kubenswrapper[4902]: E0121 15:50:17.990788 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.627493 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:30 crc kubenswrapper[4902]: E0121 15:50:30.628188 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="extract-utilities" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.628200 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="extract-utilities" Jan 21 15:50:30 crc kubenswrapper[4902]: E0121 15:50:30.628219 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="extract-content" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.628224 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="extract-content" Jan 21 15:50:30 crc kubenswrapper[4902]: E0121 15:50:30.628242 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="registry-server" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.628248 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="registry-server" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.628371 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f86f4fb-0c04-4fc9-b9ab-5adbff90fbd3" containerName="registry-server" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.631439 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.635655 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.748412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.748500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.748552 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.850832 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.850974 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.851010 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.851528 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") pod \"certified-operators-x6vbm\" 
(UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.851527 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.873029 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") pod \"certified-operators-x6vbm\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:30 crc kubenswrapper[4902]: I0121 15:50:30.954088 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:31 crc kubenswrapper[4902]: I0121 15:50:31.468034 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:32 crc kubenswrapper[4902]: I0121 15:50:32.104968 4902 generic.go:334] "Generic (PLEG): container finished" podID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerID="3fd7269ed4af2b5ed8789200b615c63fc1a7f708f657559905419462e7af7de1" exitCode=0 Jan 21 15:50:32 crc kubenswrapper[4902]: I0121 15:50:32.105134 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerDied","Data":"3fd7269ed4af2b5ed8789200b615c63fc1a7f708f657559905419462e7af7de1"} Jan 21 15:50:32 crc kubenswrapper[4902]: I0121 15:50:32.105377 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerStarted","Data":"c7ba1146533bc1aa0237851f5de15a6b9343afd48314d2c3d47a704f104f667a"} Jan 21 15:50:32 crc kubenswrapper[4902]: I0121 15:50:32.294804 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:50:32 crc kubenswrapper[4902]: E0121 15:50:32.295235 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:50:33 crc kubenswrapper[4902]: I0121 15:50:33.112961 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerStarted","Data":"832bdc2244fcacc08faf09f474999607b365e44c63c97c499a7f0ae90cc52a03"} Jan 21 15:50:34 crc kubenswrapper[4902]: I0121 15:50:34.132383 4902 generic.go:334] "Generic (PLEG): container finished" podID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerID="832bdc2244fcacc08faf09f474999607b365e44c63c97c499a7f0ae90cc52a03" exitCode=0 Jan 21 15:50:34 crc kubenswrapper[4902]: I0121 15:50:34.132475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerDied","Data":"832bdc2244fcacc08faf09f474999607b365e44c63c97c499a7f0ae90cc52a03"} Jan 21 15:50:35 crc kubenswrapper[4902]: I0121 15:50:35.142798 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerStarted","Data":"01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a"} Jan 21 15:50:35 crc kubenswrapper[4902]: I0121 15:50:35.169009 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x6vbm" podStartSLOduration=2.526322616 podStartE2EDuration="5.168983512s" podCreationTimestamp="2026-01-21 15:50:30 +0000 UTC" firstStartedPulling="2026-01-21 15:50:32.107099694 +0000 UTC m=+4594.183932723" lastFinishedPulling="2026-01-21 15:50:34.74976059 +0000 UTC m=+4596.826593619" observedRunningTime="2026-01-21 15:50:35.156260734 +0000 UTC m=+4597.233093773" watchObservedRunningTime="2026-01-21 15:50:35.168983512 +0000 UTC m=+4597.245816541" Jan 21 15:50:40 crc kubenswrapper[4902]: I0121 15:50:40.955270 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:40 crc kubenswrapper[4902]: I0121 15:50:40.955732 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:41 crc kubenswrapper[4902]: I0121 15:50:41.446646 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:41 crc kubenswrapper[4902]: I0121 15:50:41.493134 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:41 crc kubenswrapper[4902]: I0121 15:50:41.685121 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:43 crc kubenswrapper[4902]: I0121 15:50:43.197859 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x6vbm" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="registry-server" containerID="cri-o://01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a" gracePeriod=2 Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.206899 4902 generic.go:334] "Generic (PLEG): container finished" podID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerID="01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a" exitCode=0 Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.207005 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerDied","Data":"01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a"} Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.207305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6vbm" event={"ID":"c9df3040-081e-4e88-8681-9a9f78cc758b","Type":"ContainerDied","Data":"c7ba1146533bc1aa0237851f5de15a6b9343afd48314d2c3d47a704f104f667a"} Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.207325 4902 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c7ba1146533bc1aa0237851f5de15a6b9343afd48314d2c3d47a704f104f667a" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.259322 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.348104 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") pod \"c9df3040-081e-4e88-8681-9a9f78cc758b\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.348189 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") pod \"c9df3040-081e-4e88-8681-9a9f78cc758b\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.348251 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") pod \"c9df3040-081e-4e88-8681-9a9f78cc758b\" (UID: \"c9df3040-081e-4e88-8681-9a9f78cc758b\") " Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.356131 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities" (OuterVolumeSpecName: "utilities") pod "c9df3040-081e-4e88-8681-9a9f78cc758b" (UID: "c9df3040-081e-4e88-8681-9a9f78cc758b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.357298 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l" (OuterVolumeSpecName: "kube-api-access-jv95l") pod "c9df3040-081e-4e88-8681-9a9f78cc758b" (UID: "c9df3040-081e-4e88-8681-9a9f78cc758b"). InnerVolumeSpecName "kube-api-access-jv95l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.400901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9df3040-081e-4e88-8681-9a9f78cc758b" (UID: "c9df3040-081e-4e88-8681-9a9f78cc758b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.449574 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.449617 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv95l\" (UniqueName: \"kubernetes.io/projected/c9df3040-081e-4e88-8681-9a9f78cc758b-kube-api-access-jv95l\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:44 crc kubenswrapper[4902]: I0121 15:50:44.449635 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9df3040-081e-4e88-8681-9a9f78cc758b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:45 crc kubenswrapper[4902]: I0121 15:50:45.214410 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6vbm" Jan 21 15:50:45 crc kubenswrapper[4902]: I0121 15:50:45.257999 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:45 crc kubenswrapper[4902]: I0121 15:50:45.267164 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x6vbm"] Jan 21 15:50:46 crc kubenswrapper[4902]: I0121 15:50:46.295006 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:50:46 crc kubenswrapper[4902]: E0121 15:50:46.295321 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:50:46 crc kubenswrapper[4902]: I0121 15:50:46.310416 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" path="/var/lib/kubelet/pods/c9df3040-081e-4e88-8681-9a9f78cc758b/volumes" Jan 21 15:51:00 crc kubenswrapper[4902]: I0121 15:51:00.294894 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:00 crc kubenswrapper[4902]: E0121 15:51:00.295472 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:12 crc kubenswrapper[4902]: I0121 15:51:12.294848 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:12 crc kubenswrapper[4902]: E0121 15:51:12.296630 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:24 crc kubenswrapper[4902]: I0121 15:51:24.294681 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:24 crc kubenswrapper[4902]: E0121 15:51:24.296555 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:38 crc kubenswrapper[4902]: I0121 15:51:38.299300 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:38 crc kubenswrapper[4902]: E0121 15:51:38.300054 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.959366 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:51:49 crc kubenswrapper[4902]: E0121 15:51:49.960409 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="extract-content" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.960429 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="extract-content" Jan 21 15:51:49 crc kubenswrapper[4902]: E0121 15:51:49.960473 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="extract-utilities" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.960485 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="extract-utilities" Jan 21 15:51:49 crc kubenswrapper[4902]: E0121 15:51:49.960506 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="registry-server" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.960518 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="registry-server" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.960729 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9df3040-081e-4e88-8681-9a9f78cc758b" containerName="registry-server" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.962354 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:49 crc kubenswrapper[4902]: I0121 15:51:49.978411 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.025797 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.025850 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.026140 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.127829 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.127920 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.127998 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.128404 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.128415 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.146821 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") pod \"redhat-marketplace-5djtc\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.290087 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.509752 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:51:50 crc kubenswrapper[4902]: I0121 15:51:50.710274 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerStarted","Data":"758b8cb5d5d9a16f8f5849e8f336c87678d827aa78b7f59e5af43acd178efc32"} Jan 21 15:51:51 crc kubenswrapper[4902]: I0121 15:51:51.295358 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:51:51 crc kubenswrapper[4902]: E0121 15:51:51.295607 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:51:51 crc kubenswrapper[4902]: I0121 15:51:51.727606 4902 generic.go:334] "Generic (PLEG): container finished" podID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerID="283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4" exitCode=0 Jan 21 15:51:51 crc kubenswrapper[4902]: I0121 15:51:51.727692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerDied","Data":"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4"} Jan 21 15:51:51 crc kubenswrapper[4902]: I0121 15:51:51.730974 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:51:53 crc kubenswrapper[4902]: E0121 15:51:53.005134 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16275b0c_9958_4f4c_aacb_bdeed1dea4e9.slice/crio-conmon-cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7.scope\": RecentStats: unable to find data in memory cache]" Jan 21 15:51:53 crc kubenswrapper[4902]: I0121 15:51:53.746897 4902 generic.go:334] "Generic (PLEG): container finished" podID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerID="cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7" exitCode=0 Jan 21 15:51:53 crc kubenswrapper[4902]: I0121 15:51:53.747026 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerDied","Data":"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7"} Jan 21 15:51:54 crc kubenswrapper[4902]: I0121 15:51:54.757064 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" 
event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerStarted","Data":"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f"} Jan 21 15:51:54 crc kubenswrapper[4902]: I0121 15:51:54.779539 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5djtc" podStartSLOduration=3.376056637 podStartE2EDuration="5.77951659s" podCreationTimestamp="2026-01-21 15:51:49 +0000 UTC" firstStartedPulling="2026-01-21 15:51:51.730690569 +0000 UTC m=+4673.807523598" lastFinishedPulling="2026-01-21 15:51:54.134150512 +0000 UTC m=+4676.210983551" observedRunningTime="2026-01-21 15:51:54.773735467 +0000 UTC m=+4676.850568536" watchObservedRunningTime="2026-01-21 15:51:54.77951659 +0000 UTC m=+4676.856349639" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.290823 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.291575 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.337835 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.865538 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:00 crc kubenswrapper[4902]: I0121 15:52:00.918676 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:52:02 crc kubenswrapper[4902]: I0121 15:52:02.813493 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5djtc" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="registry-server" containerID="cri-o://21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" gracePeriod=2 Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.261875 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.336143 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") pod \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.336270 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") pod \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.336489 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") pod \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\" (UID: \"16275b0c-9958-4f4c-aacb-bdeed1dea4e9\") " Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.336892 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities" (OuterVolumeSpecName: "utilities") pod "16275b0c-9958-4f4c-aacb-bdeed1dea4e9" (UID: "16275b0c-9958-4f4c-aacb-bdeed1dea4e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.337098 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.341848 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh" (OuterVolumeSpecName: "kube-api-access-6cgxh") pod "16275b0c-9958-4f4c-aacb-bdeed1dea4e9" (UID: "16275b0c-9958-4f4c-aacb-bdeed1dea4e9"). InnerVolumeSpecName "kube-api-access-6cgxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.365848 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16275b0c-9958-4f4c-aacb-bdeed1dea4e9" (UID: "16275b0c-9958-4f4c-aacb-bdeed1dea4e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.437862 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.437902 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cgxh\" (UniqueName: \"kubernetes.io/projected/16275b0c-9958-4f4c-aacb-bdeed1dea4e9-kube-api-access-6cgxh\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.825854 4902 generic.go:334] "Generic (PLEG): container finished" podID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerID="21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" exitCode=0 Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.825919 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerDied","Data":"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f"} Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.825960 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5djtc" event={"ID":"16275b0c-9958-4f4c-aacb-bdeed1dea4e9","Type":"ContainerDied","Data":"758b8cb5d5d9a16f8f5849e8f336c87678d827aa78b7f59e5af43acd178efc32"} Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.825989 4902 scope.go:117] "RemoveContainer" containerID="21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.828239 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5djtc" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.852524 4902 scope.go:117] "RemoveContainer" containerID="cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.883722 4902 scope.go:117] "RemoveContainer" containerID="283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.883773 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.907659 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5djtc"] Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.912705 4902 scope.go:117] "RemoveContainer" containerID="21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" Jan 21 15:52:03 crc kubenswrapper[4902]: E0121 15:52:03.913484 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f\": container with ID starting with 21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f not found: ID does not exist" containerID="21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.913586 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f"} err="failed to get container status \"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f\": rpc error: code = NotFound desc = could not find container \"21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f\": container with ID starting with 21b36182f271170213ba3c2c31779643fdde340cd295415c05d0fe51f8d8358f not found: ID does not exist" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.913671 4902 scope.go:117] "RemoveContainer" containerID="cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7" Jan 21 15:52:03 crc kubenswrapper[4902]: E0121 15:52:03.914059 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7\": container with ID starting with cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7 not found: ID does not exist" containerID="cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.914120 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7"} err="failed to get container status \"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7\": rpc error: code = NotFound desc = could not find container \"cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7\": container with ID starting with cb47402a2d2b7ddc2592d2f61fcd51b2b2a31030c8c5a0793b7af614e954c1a7 not found: ID does not exist" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.914155 4902 scope.go:117] "RemoveContainer" containerID="283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4" Jan 21 15:52:03 crc kubenswrapper[4902]: E0121 15:52:03.914721 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4\": container with ID starting with 283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4 not found: ID does not exist" containerID="283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4" Jan 21 15:52:03 crc kubenswrapper[4902]: I0121 15:52:03.914871 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4"} err="failed to get container status \"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4\": rpc error: code = NotFound desc = could not find container \"283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4\": container with ID starting with 283aa2972e0fc118599651d1e581abbd577bd66bd4fb6144d2437d9db0ff88e4 not found: ID does not exist" Jan 21 15:52:04 crc kubenswrapper[4902]: I0121 15:52:04.307093 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" path="/var/lib/kubelet/pods/16275b0c-9958-4f4c-aacb-bdeed1dea4e9/volumes" Jan 21 15:52:06 crc kubenswrapper[4902]: I0121 15:52:06.295630 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:06 crc kubenswrapper[4902]: E0121 15:52:06.296120 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:52:20 crc kubenswrapper[4902]: I0121 15:52:20.295352 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:20 crc kubenswrapper[4902]: E0121 15:52:20.296495 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:52:22 crc kubenswrapper[4902]: I0121 15:52:22.576792 4902 scope.go:117] "RemoveContainer" containerID="dd11e45cc0c74ef372b06f28ed0e8d30f8550b3d0f4207853db07f59a58acb6c" Jan 21 15:52:22 crc kubenswrapper[4902]: I0121 15:52:22.604065 4902 scope.go:117] "RemoveContainer" containerID="80bde30b7f08416842fa9bf564e5ae365cc209b3408a91a7116d04988188a363" Jan 21 15:52:22 crc kubenswrapper[4902]: I0121 15:52:22.658230 4902 scope.go:117] "RemoveContainer" containerID="636d22adb461bb6373e2fe80b61c78f4fbed5473aeb591006417a99bb62d7944" Jan 21 15:52:32 crc kubenswrapper[4902]: I0121 15:52:32.295097 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:32 crc kubenswrapper[4902]: E0121 15:52:32.296312 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:52:45 crc kubenswrapper[4902]: I0121 15:52:45.295997 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:45 crc kubenswrapper[4902]: E0121 15:52:45.296950 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:52:57 crc kubenswrapper[4902]: I0121 15:52:57.295077 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:52:57 crc kubenswrapper[4902]: E0121 15:52:57.296233 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:08 crc kubenswrapper[4902]: I0121 15:53:08.299689 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:53:08 crc kubenswrapper[4902]: E0121 15:53:08.300691 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:23 crc kubenswrapper[4902]: I0121 15:53:23.295354 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:53:23 crc kubenswrapper[4902]: E0121 15:53:23.296296 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:35 crc kubenswrapper[4902]: I0121 15:53:35.295414 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:53:35 crc kubenswrapper[4902]: E0121 15:53:35.296421 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:47 crc kubenswrapper[4902]: I0121 15:53:47.294842 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:53:47 crc kubenswrapper[4902]: E0121 15:53:47.295718 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.402283 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"] Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.413771 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-2v7g4"] Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.543832 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:53:56 crc kubenswrapper[4902]: E0121 15:53:56.544216 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="extract-utilities" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544236 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="extract-utilities" Jan 21 15:53:56 crc kubenswrapper[4902]: E0121 15:53:56.544261 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="extract-content" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544271 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="extract-content" Jan 21 15:53:56 crc kubenswrapper[4902]: E0121 15:53:56.544296 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="registry-server" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544304 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="registry-server" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544476 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="16275b0c-9958-4f4c-aacb-bdeed1dea4e9" containerName="registry-server" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.544990 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.547670 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.547699 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.547681 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.549154 4902 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-qwwr2" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.559326 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.682910 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.682955 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.683076 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.784787 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.784877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.784913 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.785832 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " 
pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.786402 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.815862 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") pod \"crc-storage-crc-q2dqw\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:56 crc kubenswrapper[4902]: I0121 15:53:56.917605 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:53:57 crc kubenswrapper[4902]: I0121 15:53:57.396988 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:53:57 crc kubenswrapper[4902]: I0121 15:53:57.740423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q2dqw" event={"ID":"446c29bb-358e-4b5a-adaa-e4b06dc62edf","Type":"ContainerStarted","Data":"5055af7fe172f0c127bf4f00512c45bf861271d3b80f2acd1cf94660106078be"} Jan 21 15:53:58 crc kubenswrapper[4902]: I0121 15:53:58.320935 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33301553-deaa-4183-9538-1a43f822be80" path="/var/lib/kubelet/pods/33301553-deaa-4183-9538-1a43f822be80/volumes" Jan 21 15:53:58 crc kubenswrapper[4902]: I0121 15:53:58.750754 4902 generic.go:334] "Generic (PLEG): container finished" podID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" containerID="8cefa707fcc5de9979cdbb8b42dd928ba6a77070fd6ce0a791939df6996a702e" exitCode=0 Jan 21 15:53:58 crc kubenswrapper[4902]: I0121 15:53:58.752067 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q2dqw" event={"ID":"446c29bb-358e-4b5a-adaa-e4b06dc62edf","Type":"ContainerDied","Data":"8cefa707fcc5de9979cdbb8b42dd928ba6a77070fd6ce0a791939df6996a702e"} Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.014508 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.138758 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") pod \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.138826 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") pod \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.138871 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") pod \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\" (UID: \"446c29bb-358e-4b5a-adaa-e4b06dc62edf\") " Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.139463 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "446c29bb-358e-4b5a-adaa-e4b06dc62edf" (UID: "446c29bb-358e-4b5a-adaa-e4b06dc62edf"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.145910 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z" (OuterVolumeSpecName: "kube-api-access-j4h8z") pod "446c29bb-358e-4b5a-adaa-e4b06dc62edf" (UID: "446c29bb-358e-4b5a-adaa-e4b06dc62edf"). InnerVolumeSpecName "kube-api-access-j4h8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.161723 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "446c29bb-358e-4b5a-adaa-e4b06dc62edf" (UID: "446c29bb-358e-4b5a-adaa-e4b06dc62edf"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.240859 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4h8z\" (UniqueName: \"kubernetes.io/projected/446c29bb-358e-4b5a-adaa-e4b06dc62edf-kube-api-access-j4h8z\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.240905 4902 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/446c29bb-358e-4b5a-adaa-e4b06dc62edf-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.240925 4902 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/446c29bb-358e-4b5a-adaa-e4b06dc62edf-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.769870 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q2dqw" event={"ID":"446c29bb-358e-4b5a-adaa-e4b06dc62edf","Type":"ContainerDied","Data":"5055af7fe172f0c127bf4f00512c45bf861271d3b80f2acd1cf94660106078be"} Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.769911 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5055af7fe172f0c127bf4f00512c45bf861271d3b80f2acd1cf94660106078be" Jan 21 15:54:00 crc kubenswrapper[4902]: I0121 15:54:00.769973 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q2dqw" Jan 21 15:54:01 crc kubenswrapper[4902]: I0121 15:54:01.295640 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:01 crc kubenswrapper[4902]: E0121 15:54:01.296212 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.306264 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.313214 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-q2dqw"] Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.449673 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-p7fjr"] Jan 21 15:54:02 crc kubenswrapper[4902]: E0121 15:54:02.449948 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" containerName="storage" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.449961 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" containerName="storage" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.450127 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" containerName="storage" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.450632 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.453074 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.453164 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.454331 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.455125 4902 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-qwwr2" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.466598 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-p7fjr"] Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.579555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.579693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.579767 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.680653 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.680717 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.680751 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.681214 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " 
pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.681943 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.698912 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") pod \"crc-storage-crc-p7fjr\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:02 crc kubenswrapper[4902]: I0121 15:54:02.767204 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:03 crc kubenswrapper[4902]: I0121 15:54:03.005864 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-p7fjr"] Jan 21 15:54:03 crc kubenswrapper[4902]: W0121 15:54:03.008853 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8be47d9_db95_4ff5_8d65_2bea0c3d32be.slice/crio-99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb WatchSource:0}: Error finding container 99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb: Status 404 returned error can't find the container with id 99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb Jan 21 15:54:03 crc kubenswrapper[4902]: I0121 15:54:03.793965 4902 generic.go:334] "Generic (PLEG): container finished" podID="b8be47d9-db95-4ff5-8d65-2bea0c3d32be" containerID="4ba7c3ca543296c161204a3805cebef49261b19ee3ebe778fe20553434c19786" exitCode=0 Jan 21 15:54:03 crc kubenswrapper[4902]: I0121 15:54:03.794234 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p7fjr" event={"ID":"b8be47d9-db95-4ff5-8d65-2bea0c3d32be","Type":"ContainerDied","Data":"4ba7c3ca543296c161204a3805cebef49261b19ee3ebe778fe20553434c19786"} Jan 21 15:54:03 crc kubenswrapper[4902]: I0121 15:54:03.794263 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p7fjr" event={"ID":"b8be47d9-db95-4ff5-8d65-2bea0c3d32be","Type":"ContainerStarted","Data":"99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb"} Jan 21 15:54:04 crc kubenswrapper[4902]: I0121 15:54:04.302414 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446c29bb-358e-4b5a-adaa-e4b06dc62edf" path="/var/lib/kubelet/pods/446c29bb-358e-4b5a-adaa-e4b06dc62edf/volumes" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.478230 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.625718 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") pod \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.626297 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") pod \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.626421 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") pod \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\" (UID: \"b8be47d9-db95-4ff5-8d65-2bea0c3d32be\") " Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.626432 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b8be47d9-db95-4ff5-8d65-2bea0c3d32be" (UID: "b8be47d9-db95-4ff5-8d65-2bea0c3d32be"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.627228 4902 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.631156 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5" (OuterVolumeSpecName: "kube-api-access-dxwh5") pod "b8be47d9-db95-4ff5-8d65-2bea0c3d32be" (UID: "b8be47d9-db95-4ff5-8d65-2bea0c3d32be"). InnerVolumeSpecName "kube-api-access-dxwh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.645707 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b8be47d9-db95-4ff5-8d65-2bea0c3d32be" (UID: "b8be47d9-db95-4ff5-8d65-2bea0c3d32be"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.728999 4902 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.729030 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxwh5\" (UniqueName: \"kubernetes.io/projected/b8be47d9-db95-4ff5-8d65-2bea0c3d32be-kube-api-access-dxwh5\") on node \"crc\" DevicePath \"\"" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.811685 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-p7fjr" event={"ID":"b8be47d9-db95-4ff5-8d65-2bea0c3d32be","Type":"ContainerDied","Data":"99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb"} Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.811723 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99f85c2d6f8502e50cf501a7e3ef41712f820c0592badb8e8ad53ed816f55fdb" Jan 21 15:54:05 crc kubenswrapper[4902]: I0121 15:54:05.811734 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-p7fjr" Jan 21 15:54:12 crc kubenswrapper[4902]: I0121 15:54:12.295364 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:12 crc kubenswrapper[4902]: E0121 15:54:12.296265 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:23 crc kubenswrapper[4902]: I0121 15:54:23.648626 4902 scope.go:117] "RemoveContainer" containerID="bea584749b1ccfd891d97d3ebbaf45ab41b6cc3e6efd100d0aa2c6701cc97c94" Jan 21 15:54:24 crc kubenswrapper[4902]: I0121 15:54:24.296210 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:24 crc kubenswrapper[4902]: E0121 15:54:24.297020 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:36 crc kubenswrapper[4902]: I0121 15:54:36.295631 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:36 crc kubenswrapper[4902]: E0121 15:54:36.297392 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:51 crc kubenswrapper[4902]: I0121 
15:54:51.294625 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:54:51 crc kubenswrapper[4902]: E0121 15:54:51.295761 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.875240 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:54:54 crc kubenswrapper[4902]: E0121 15:54:54.875597 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8be47d9-db95-4ff5-8d65-2bea0c3d32be" containerName="storage" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.875613 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8be47d9-db95-4ff5-8d65-2bea0c3d32be" containerName="storage" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.875799 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8be47d9-db95-4ff5-8d65-2bea0c3d32be" containerName="storage" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.879326 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:54 crc kubenswrapper[4902]: I0121 15:54:54.890128 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.047750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.047816 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkkdc\" (UniqueName: \"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.047842 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.149413 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.149757 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkkdc\" (UniqueName: 
\"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.149845 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.149912 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.150228 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.170004 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkkdc\" (UniqueName: \"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") pod \"community-operators-fszmf\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.198558 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:54:55 crc kubenswrapper[4902]: I0121 15:54:55.729196 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:54:56 crc kubenswrapper[4902]: I0121 15:54:56.243838 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerStarted","Data":"c61df1d1e5639819b900598787c6c9d4d0639ced8074247f8471086728aefad4"} Jan 21 15:54:57 crc kubenswrapper[4902]: I0121 15:54:57.252247 4902 generic.go:334] "Generic (PLEG): container finished" podID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerID="e9c3b082b60fb672921b1a5f09c1b6f91d4f0b1a8e2ddc94f470bae53c566dfb" exitCode=0 Jan 21 15:54:57 crc kubenswrapper[4902]: I0121 15:54:57.252310 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerDied","Data":"e9c3b082b60fb672921b1a5f09c1b6f91d4f0b1a8e2ddc94f470bae53c566dfb"} Jan 21 15:54:58 crc kubenswrapper[4902]: I0121 15:54:58.286879 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerStarted","Data":"edc76b00aca584cdf879dd18110a5799c944cc08737388cb80875856e9509584"} Jan 21 15:54:59 crc kubenswrapper[4902]: I0121 15:54:59.298838 4902 generic.go:334] "Generic (PLEG): container finished" podID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerID="edc76b00aca584cdf879dd18110a5799c944cc08737388cb80875856e9509584" exitCode=0 Jan 21 15:54:59 crc kubenswrapper[4902]: I0121 15:54:59.298888 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerDied","Data":"edc76b00aca584cdf879dd18110a5799c944cc08737388cb80875856e9509584"} Jan 21 15:55:00 crc kubenswrapper[4902]: I0121 15:55:00.308399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerStarted","Data":"38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e"} Jan 21 15:55:00 crc kubenswrapper[4902]: I0121 15:55:00.332508 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fszmf" podStartSLOduration=3.901762625 podStartE2EDuration="6.332492984s" podCreationTimestamp="2026-01-21 15:54:54 +0000 UTC" firstStartedPulling="2026-01-21 15:54:57.254066569 +0000 UTC m=+4859.330899608" lastFinishedPulling="2026-01-21 15:54:59.684796938 +0000 UTC m=+4861.761629967" observedRunningTime="2026-01-21 15:55:00.330006084 +0000 UTC m=+4862.406839113" watchObservedRunningTime="2026-01-21 15:55:00.332492984 +0000 UTC m=+4862.409326013" Jan 21 15:55:04 crc kubenswrapper[4902]: I0121 15:55:04.295396 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:55:04 crc kubenswrapper[4902]: E0121 15:55:04.296387 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.199601 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.200002 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.270748 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.390941 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:05 crc kubenswrapper[4902]: I0121 15:55:05.509235 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:55:07 crc kubenswrapper[4902]: I0121 15:55:07.358452 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fszmf" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="registry-server" containerID="cri-o://38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e" gracePeriod=2 Jan 21 15:55:08 crc kubenswrapper[4902]: I0121 15:55:08.367708 4902 generic.go:334] "Generic (PLEG): container finished" podID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerID="38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e" exitCode=0 Jan 21 15:55:08 crc kubenswrapper[4902]: I0121 15:55:08.367759 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerDied","Data":"38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e"} Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.301593 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.369144 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") pod \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.369236 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") pod \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.370294 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities" (OuterVolumeSpecName: "utilities") pod "1ad44bd6-85c0-4945-8d3d-e009a0abc10c" (UID: "1ad44bd6-85c0-4945-8d3d-e009a0abc10c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.370620 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.377677 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fszmf" event={"ID":"1ad44bd6-85c0-4945-8d3d-e009a0abc10c","Type":"ContainerDied","Data":"c61df1d1e5639819b900598787c6c9d4d0639ced8074247f8471086728aefad4"} Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.377733 4902 scope.go:117] "RemoveContainer" containerID="38081958732e9846e753a85b2af88f19986db078acee28d84ec86e2d80ef2d2e" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.377780 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fszmf" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.397703 4902 scope.go:117] "RemoveContainer" containerID="edc76b00aca584cdf879dd18110a5799c944cc08737388cb80875856e9509584" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.415308 4902 scope.go:117] "RemoveContainer" containerID="e9c3b082b60fb672921b1a5f09c1b6f91d4f0b1a8e2ddc94f470bae53c566dfb" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.433253 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ad44bd6-85c0-4945-8d3d-e009a0abc10c" (UID: "1ad44bd6-85c0-4945-8d3d-e009a0abc10c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.471804 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkkdc\" (UniqueName: \"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") pod \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\" (UID: \"1ad44bd6-85c0-4945-8d3d-e009a0abc10c\") " Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.472163 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.477155 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc" (OuterVolumeSpecName: "kube-api-access-gkkdc") pod "1ad44bd6-85c0-4945-8d3d-e009a0abc10c" (UID: "1ad44bd6-85c0-4945-8d3d-e009a0abc10c"). InnerVolumeSpecName "kube-api-access-gkkdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.573427 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkkdc\" (UniqueName: \"kubernetes.io/projected/1ad44bd6-85c0-4945-8d3d-e009a0abc10c-kube-api-access-gkkdc\") on node \"crc\" DevicePath \"\"" Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.715937 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:55:09 crc kubenswrapper[4902]: I0121 15:55:09.722743 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fszmf"] Jan 21 15:55:10 crc kubenswrapper[4902]: I0121 15:55:10.305655 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" path="/var/lib/kubelet/pods/1ad44bd6-85c0-4945-8d3d-e009a0abc10c/volumes" Jan 21 15:55:17 crc kubenswrapper[4902]: I0121 15:55:17.295823 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:55:17 crc kubenswrapper[4902]: E0121 15:55:17.296553 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 15:55:30 crc kubenswrapper[4902]: I0121 15:55:30.295838 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:55:31 crc kubenswrapper[4902]: I0121 15:55:31.525705 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb"} Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.616999 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:09 crc kubenswrapper[4902]: E0121 15:56:09.617885 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="extract-utilities" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.617907 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="extract-utilities" Jan 21 15:56:09 crc kubenswrapper[4902]: E0121 15:56:09.617929 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="extract-content" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.617935 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="extract-content" Jan 21 15:56:09 crc kubenswrapper[4902]: E0121 15:56:09.617954 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="registry-server" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.617961 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="registry-server" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.618112 4902 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1ad44bd6-85c0-4945-8d3d-e009a0abc10c" containerName="registry-server" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.619080 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.623691 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.782676 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.782756 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.782822 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.883770 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.883850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.883878 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.884299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.884383 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") pod \"redhat-operators-js569\" 
(UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.906574 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") pod \"redhat-operators-js569\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:09 crc kubenswrapper[4902]: I0121 15:56:09.936862 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:10 crc kubenswrapper[4902]: I0121 15:56:10.410254 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:10 crc kubenswrapper[4902]: I0121 15:56:10.847760 4902 generic.go:334] "Generic (PLEG): container finished" podID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerID="45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46" exitCode=0 Jan 21 15:56:10 crc kubenswrapper[4902]: I0121 15:56:10.847856 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerDied","Data":"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46"} Jan 21 15:56:10 crc kubenswrapper[4902]: I0121 15:56:10.848085 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerStarted","Data":"36eba7ca272d5745d020ecf93e963e4f76dcea615b0e4afd3a2fb792e8ede2ff"} Jan 21 15:56:11 crc kubenswrapper[4902]: I0121 15:56:11.858354 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerStarted","Data":"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473"} Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.041351 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.042869 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.045195 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.045464 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.045780 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.050841 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.052277 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.053400 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-z96z6" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.054671 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.058486 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.064737 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218252 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218286 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218339 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.218359 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.319689 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.320162 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc 
kubenswrapper[4902]: I0121 15:56:12.320575 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.320664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.320800 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.320866 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.321543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.321708 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.353762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") pod \"dnsmasq-dns-56bbd59dc5-t2t4d\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.353763 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") pod \"dnsmasq-dns-5986db9b4f-vw29z\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.365195 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.382853 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.598302 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.659174 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.660630 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.684453 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.734529 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.734767 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.734806 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.836219 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.836310 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.836331 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.837523 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.837564 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.861024 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") pod \"dnsmasq-dns-865d9b578f-zhthq\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.867656 4902 generic.go:334] "Generic (PLEG): container finished" podID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerID="c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473" exitCode=0 Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.867715 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerDied","Data":"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473"} Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.910125 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:12 crc kubenswrapper[4902]: I0121 15:56:12.995157 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.084744 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.088644 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.148894 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.150473 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.201676 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.348194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.348593 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.348625 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.375500 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:13 crc kubenswrapper[4902]: W0121 15:56:13.414858 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2fd66a2_371b_44b8_bdd4_b6be36c4093f.slice/crio-d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4 WatchSource:0}: Error finding container d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4: Status 404 returned error can't find the container with id d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4 Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.449636 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.449723 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.449761 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.450790 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") pod 
\"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.451459 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.472065 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") pod \"dnsmasq-dns-5d79f765b5-bqqlm\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.516603 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.786359 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.811197 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.833163 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837271 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837432 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837550 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-928bn" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.837886 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.838430 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.838828 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.843021 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.907754 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerStarted","Data":"cd7e0cd801ba79f538e3c63c7aa4f7926d46008854b1879da441818cd04cf0dc"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.911085 4902 generic.go:334] "Generic (PLEG): container finished" podID="30d00674-287c-403a-824a-b276b754f347" containerID="558fbcd6b35082dbaf76c770e098d73f89c3826407d9e535f25c3583444b1ed8" exitCode=0 Jan 21 15:56:13 crc 
kubenswrapper[4902]: I0121 15:56:13.912837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" event={"ID":"30d00674-287c-403a-824a-b276b754f347","Type":"ContainerDied","Data":"558fbcd6b35082dbaf76c770e098d73f89c3826407d9e535f25c3583444b1ed8"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.912936 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" event={"ID":"30d00674-287c-403a-824a-b276b754f347","Type":"ContainerStarted","Data":"94abbe62094a7750b098b45bd14aa3af3bfdaf14f32c54556cd1707199bba1ca"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.921570 4902 generic.go:334] "Generic (PLEG): container finished" podID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerID="0eec98b5d0b0be8d198331d620aaf26c943f2f70750ff630c0d78b7c5a83456c" exitCode=0 Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.921665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerDied","Data":"0eec98b5d0b0be8d198331d620aaf26c943f2f70750ff630c0d78b7c5a83456c"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.921702 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerStarted","Data":"d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.942806 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerStarted","Data":"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.957840 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.957937 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.957964 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.957995 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958016 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958068 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958128 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958155 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958179 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.958197 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.963345 4902 generic.go:334] "Generic (PLEG): container finished" podID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" containerID="e174e52c2764056538bbb95c67918069f9399591d9ba7544f3fb5d6d28846bd3" exitCode=0 Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.963511 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" event={"ID":"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5","Type":"ContainerDied","Data":"e174e52c2764056538bbb95c67918069f9399591d9ba7544f3fb5d6d28846bd3"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.963552 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" 
event={"ID":"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5","Type":"ContainerStarted","Data":"269383cbad3588e0a0a04dc6eb7ceb2301277087918a863ab8238086c4a80bab"} Jan 21 15:56:13 crc kubenswrapper[4902]: I0121 15:56:13.987566 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-js569" podStartSLOduration=2.504600654 podStartE2EDuration="4.987542045s" podCreationTimestamp="2026-01-21 15:56:09 +0000 UTC" firstStartedPulling="2026-01-21 15:56:10.849679344 +0000 UTC m=+4932.926512373" lastFinishedPulling="2026-01-21 15:56:13.332620725 +0000 UTC m=+4935.409453764" observedRunningTime="2026-01-21 15:56:13.978398657 +0000 UTC m=+4936.055231686" watchObservedRunningTime="2026-01-21 15:56:13.987542045 +0000 UTC m=+4936.064375084" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059426 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059478 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059513 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059564 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059604 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059631 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059651 
4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059688 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.059802 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.064936 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.065268 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.065950 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.067994 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.074067 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.074665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.075790 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.081647 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.082171 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.082216 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ca3246581f7b05cdf38cd2988972c40f4ce4dbd3e3f2637534a551fbe51cdea/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.086520 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.087289 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.123877 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.154078 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.163814 4902 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 21 15:56:14 crc kubenswrapper[4902]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d2fd66a2-371b-44b8-bdd4-b6be36c4093f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 21 15:56:14 crc kubenswrapper[4902]: > podSandboxID="d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4" Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.163981 4902 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:56:14 crc kubenswrapper[4902]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb6hc5h68h68h594h659hdbh679h65ch5f6hdch6h5b9h8fh55hfhf8h57fhc7h56ch687h669h559h678h5dhc7hf7h697h5d6h9ch669h54fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmkgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-865d9b578f-zhthq_openstack(d2fd66a2-371b-44b8-bdd4-b6be36c4093f): CreateContainerError: container create failed: mount 
`/var/lib/kubelet/pods/d2fd66a2-371b-44b8-bdd4-b6be36c4093f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 21 15:56:14 crc kubenswrapper[4902]: > logger="UnhandledError" Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.166176 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d2fd66a2-371b-44b8-bdd4-b6be36c4093f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.273253 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.274558 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280417 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280513 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280624 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280844 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.280949 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.281121 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.281230 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.281287 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ssxxh" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.287682 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.319641 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.366981 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") pod \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367035 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") pod \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367109 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") pod \"30d00674-287c-403a-824a-b276b754f347\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367141 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") pod \"30d00674-287c-403a-824a-b276b754f347\" (UID: \"30d00674-287c-403a-824a-b276b754f347\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367164 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") pod \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\" (UID: \"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5\") " Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.367975 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368019 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368039 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368110 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " 
pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368127 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368154 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368193 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368218 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368232 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368279 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.368301 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.371952 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252" (OuterVolumeSpecName: "kube-api-access-45252") pod "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" (UID: "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5"). InnerVolumeSpecName "kube-api-access-45252". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.375607 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp" (OuterVolumeSpecName: "kube-api-access-wbnzp") pod "30d00674-287c-403a-824a-b276b754f347" (UID: "30d00674-287c-403a-824a-b276b754f347"). InnerVolumeSpecName "kube-api-access-wbnzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.387059 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config" (OuterVolumeSpecName: "config") pod "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" (UID: "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.388130 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" (UID: "8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.390398 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config" (OuterVolumeSpecName: "config") pod "30d00674-287c-403a-824a-b276b754f347" (UID: "30d00674-287c-403a-824a-b276b754f347"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469159 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469211 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469253 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469290 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469318 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469338 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469382 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469406 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469434 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469531 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469558 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469611 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d00674-287c-403a-824a-b276b754f347-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469625 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbnzp\" (UniqueName: \"kubernetes.io/projected/30d00674-287c-403a-824a-b276b754f347-kube-api-access-wbnzp\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469637 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469649 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45252\" (UniqueName: \"kubernetes.io/projected/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-kube-api-access-45252\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.469660 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.470063 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.470612 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.470791 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.472036 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.472507 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.472540 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/044d17188a71d87a2f162043dfcb436253bd0043d87dd6a91403116fc167aa96/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.473354 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.473792 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.473819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.473983 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.476016 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.486630 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.499191 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.606116 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:56:14 crc kubenswrapper[4902]: W0121 15:56:14.636069 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6f17a65_e372_463d_b875_c8acdd3a8a04.slice/crio-9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7 WatchSource:0}: Error finding container 9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7: Status 404 returned error can't find the container with id 9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7 Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.638463 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.825941 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.826899 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.826924 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: E0121 15:56:14.826948 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d00674-287c-403a-824a-b276b754f347" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.826955 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d00674-287c-403a-824a-b276b754f347" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.827138 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d00674-287c-403a-824a-b276b754f347" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.827166 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" containerName="init" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 
15:56:14.828252 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.836881 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.837327 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zxvnh" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.837429 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.840595 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.843101 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.845791 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.972905 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.972897 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-t2t4d" event={"ID":"8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5","Type":"ContainerDied","Data":"269383cbad3588e0a0a04dc6eb7ceb2301277087918a863ab8238086c4a80bab"} Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.973158 4902 scope.go:117] "RemoveContainer" containerID="e174e52c2764056538bbb95c67918069f9399591d9ba7544f3fb5d6d28846bd3" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.975189 4902 generic.go:334] "Generic (PLEG): container finished" podID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerID="d0d1ff36d9c251f2b2fbf7c284bbe148be1cf281267966cd9400c8ef5a5fdfad" exitCode=0 Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.975340 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerDied","Data":"d0d1ff36d9c251f2b2fbf7c284bbe148be1cf281267966cd9400c8ef5a5fdfad"} Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.977656 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz5cl\" (UniqueName: \"kubernetes.io/projected/a02660d2-21f1-4d0b-9351-efc03413d6f8-kube-api-access-hz5cl\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.977814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.977993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" 
Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978121 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978267 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978605 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.978659 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.979157 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" event={"ID":"30d00674-287c-403a-824a-b276b754f347","Type":"ContainerDied","Data":"94abbe62094a7750b098b45bd14aa3af3bfdaf14f32c54556cd1707199bba1ca"} Jan 21 15:56:14 crc kubenswrapper[4902]: I0121 15:56:14.979848 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-vw29z" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.003844 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerStarted","Data":"9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7"} Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.083012 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.093626 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094721 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094825 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094900 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz5cl\" (UniqueName: \"kubernetes.io/projected/a02660d2-21f1-4d0b-9351-efc03413d6f8-kube-api-access-hz5cl\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094963 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.094992 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.095052 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " 
pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.096701 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.097889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.098519 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a02660d2-21f1-4d0b-9351-efc03413d6f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.101877 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: W0121 15:56:15.107253 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53c0907a_0c62_4813_af74_b0f97c0e3c16.slice/crio-823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578 WatchSource:0}: Error finding container 823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578: Status 404 returned error can't find the container with id 823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578 Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.110999 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a02660d2-21f1-4d0b-9351-efc03413d6f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.112224 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02660d2-21f1-4d0b-9351-efc03413d6f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.121404 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz5cl\" (UniqueName: \"kubernetes.io/projected/a02660d2-21f1-4d0b-9351-efc03413d6f8-kube-api-access-hz5cl\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.122327 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.122376 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a452236eb5d88b410c04f3c61b2f470566f27d1a0d65069000c34e834ad468d2/globalmount\"" pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.171427 4902 scope.go:117] "RemoveContainer" containerID="558fbcd6b35082dbaf76c770e098d73f89c3826407d9e535f25c3583444b1ed8" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.194164 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ddf73e8-c0ce-4e43-a7d5-8101c6d15663\") pod \"openstack-galera-0\" (UID: \"a02660d2-21f1-4d0b-9351-efc03413d6f8\") " pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.202117 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.207073 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-vw29z"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.240646 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.251762 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-t2t4d"] Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.452173 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:56:15 crc kubenswrapper[4902]: I0121 15:56:15.986566 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:56:16 crc kubenswrapper[4902]: W0121 15:56:15.997833 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda02660d2_21f1_4d0b_9351_efc03413d6f8.slice/crio-cb87312ee131e2f24b830e2bd7827493dbfb37c1a3852de93b33b3b7d6a43539 WatchSource:0}: Error finding container cb87312ee131e2f24b830e2bd7827493dbfb37c1a3852de93b33b3b7d6a43539: Status 404 returned error can't find the container with id cb87312ee131e2f24b830e2bd7827493dbfb37c1a3852de93b33b3b7d6a43539 Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.014958 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerStarted","Data":"69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.016132 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.028036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerStarted","Data":"df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.028672 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.030945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a02660d2-21f1-4d0b-9351-efc03413d6f8","Type":"ContainerStarted","Data":"cb87312ee131e2f24b830e2bd7827493dbfb37c1a3852de93b33b3b7d6a43539"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.038579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerStarted","Data":"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.041540 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" podStartSLOduration=4.041521089 podStartE2EDuration="4.041521089s" podCreationTimestamp="2026-01-21 15:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:16.033562955 +0000 UTC m=+4938.110395984" watchObservedRunningTime="2026-01-21 15:56:16.041521089 +0000 UTC m=+4938.118354118" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.042752 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerStarted","Data":"823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578"} Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.060565 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" podStartSLOduration=3.060548256 podStartE2EDuration="3.060548256s" podCreationTimestamp="2026-01-21 15:56:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:16.053590529 +0000 UTC m=+4938.130423568" watchObservedRunningTime="2026-01-21 15:56:16.060548256 +0000 UTC m=+4938.137381285" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.283118 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.284608 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.287065 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.287271 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.287332 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.289535 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-m7kxs" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.306856 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d00674-287c-403a-824a-b276b754f347" path="/var/lib/kubelet/pods/30d00674-287c-403a-824a-b276b754f347/volumes" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.307889 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5" path="/var/lib/kubelet/pods/8c69fa65-89b0-4a0d-a5dd-6d0b3c0ff6f5/volumes" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.308818 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423380 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2nt\" (UniqueName: \"kubernetes.io/projected/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kube-api-access-gt2nt\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423439 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423466 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423541 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423725 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a688da8-421e-4a07-8548-ad3f7b610235\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a688da8-421e-4a07-8548-ad3f7b610235\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423785 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423875 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.423913 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525663 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9a688da8-421e-4a07-8548-ad3f7b610235\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a688da8-421e-4a07-8548-ad3f7b610235\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525736 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525770 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525785 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt2nt\" (UniqueName: 
\"kubernetes.io/projected/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kube-api-access-gt2nt\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525830 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525845 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.525900 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.526827 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.527325 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.527486 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.528163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.531569 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.531826 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9a688da8-421e-4a07-8548-ad3f7b610235\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a688da8-421e-4a07-8548-ad3f7b610235\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f39cebcc9f41876819dcdd0155f6be58c57563341918546311a827df2908cfd/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.531886 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.536160 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.546108 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt2nt\" (UniqueName: \"kubernetes.io/projected/a211ebd7-f82f-4cc7-91d3-77ec265a5d11-kube-api-access-gt2nt\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.558540 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9a688da8-421e-4a07-8548-ad3f7b610235\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a688da8-421e-4a07-8548-ad3f7b610235\") pod \"openstack-cell1-galera-0\" (UID: \"a211ebd7-f82f-4cc7-91d3-77ec265a5d11\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.600908 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.646934 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.647889 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.652012 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9s5nr" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.652447 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.652640 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.665262 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.730833 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kolla-config\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.730912 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-combined-ca-bundle\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.730960 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qwk7\" (UniqueName: \"kubernetes.io/projected/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kube-api-access-9qwk7\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.731004 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-config-data\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.731130 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-memcached-tls-certs\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832631 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-memcached-tls-certs\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832708 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kolla-config\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832753 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-combined-ca-bundle\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832779 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qwk7\" (UniqueName: \"kubernetes.io/projected/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kube-api-access-9qwk7\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.832826 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-config-data\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.833784 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kolla-config\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.834146 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32eae2d9-5b57-4ae9-8451-fa00bd7be443-config-data\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.836754 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-memcached-tls-certs\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.837806 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32eae2d9-5b57-4ae9-8451-fa00bd7be443-combined-ca-bundle\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.858736 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qwk7\" (UniqueName: \"kubernetes.io/projected/32eae2d9-5b57-4ae9-8451-fa00bd7be443-kube-api-access-9qwk7\") pod \"memcached-0\" (UID: \"32eae2d9-5b57-4ae9-8451-fa00bd7be443\") " pod="openstack/memcached-0" Jan 21 15:56:16 crc kubenswrapper[4902]: I0121 15:56:16.971788 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 15:56:17 crc kubenswrapper[4902]: I0121 15:56:17.062932 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:56:17 crc kubenswrapper[4902]: I0121 15:56:17.067783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerStarted","Data":"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757"} Jan 21 15:56:17 crc kubenswrapper[4902]: I0121 15:56:17.072527 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a02660d2-21f1-4d0b-9351-efc03413d6f8","Type":"ContainerStarted","Data":"d08735009117ed5e41317063f52447415b62dc5b644c6f8391c47548e16f143f"} Jan 21 15:56:17 crc kubenswrapper[4902]: W0121 15:56:17.425379 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32eae2d9_5b57_4ae9_8451_fa00bd7be443.slice/crio-9afefa1e769a5d379636b4c6993d140a9c0b1f8c5ac584a16c0fee77b4bb4fcd WatchSource:0}: Error finding container 9afefa1e769a5d379636b4c6993d140a9c0b1f8c5ac584a16c0fee77b4bb4fcd: Status 404 returned error can't find the container with id 9afefa1e769a5d379636b4c6993d140a9c0b1f8c5ac584a16c0fee77b4bb4fcd Jan 21 15:56:17 crc kubenswrapper[4902]: I0121 15:56:17.427396 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:56:18 crc kubenswrapper[4902]: I0121 15:56:18.089077 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a211ebd7-f82f-4cc7-91d3-77ec265a5d11","Type":"ContainerStarted","Data":"d5bb4603423cd8d93efab95c695e74107c0f4c4cb84aa804302e21ed56b1a624"} Jan 21 15:56:18 crc kubenswrapper[4902]: I0121 15:56:18.089389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a211ebd7-f82f-4cc7-91d3-77ec265a5d11","Type":"ContainerStarted","Data":"58ef484cb0df0811e97cf23f2a71589b8f87fe0790ccc5a55ea32683c71203a6"} Jan 21 15:56:18 crc kubenswrapper[4902]: I0121 15:56:18.091221 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"32eae2d9-5b57-4ae9-8451-fa00bd7be443","Type":"ContainerStarted","Data":"9afefa1e769a5d379636b4c6993d140a9c0b1f8c5ac584a16c0fee77b4bb4fcd"} Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.101732 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"32eae2d9-5b57-4ae9-8451-fa00bd7be443","Type":"ContainerStarted","Data":"1b9363ebee1c365ac9cef072f722170cff59ebd8e56ca16a3fa2b4b46f37d173"} Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.101978 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.149175 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.149158228 podStartE2EDuration="3.149158228s" podCreationTimestamp="2026-01-21 15:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:19.124799042 +0000 UTC m=+4941.201632071" watchObservedRunningTime="2026-01-21 15:56:19.149158228 +0000 UTC m=+4941.225991257" Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.937674 4902 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.937731 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:19 crc kubenswrapper[4902]: I0121 15:56:19.974071 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:20 crc kubenswrapper[4902]: I0121 15:56:20.158810 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:20 crc kubenswrapper[4902]: I0121 15:56:20.208134 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:21 crc kubenswrapper[4902]: I0121 15:56:21.122384 4902 generic.go:334] "Generic (PLEG): container finished" podID="a02660d2-21f1-4d0b-9351-efc03413d6f8" containerID="d08735009117ed5e41317063f52447415b62dc5b644c6f8391c47548e16f143f" exitCode=0 Jan 21 15:56:21 crc kubenswrapper[4902]: I0121 15:56:21.122992 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a02660d2-21f1-4d0b-9351-efc03413d6f8","Type":"ContainerDied","Data":"d08735009117ed5e41317063f52447415b62dc5b644c6f8391c47548e16f143f"} Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.133819 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a02660d2-21f1-4d0b-9351-efc03413d6f8","Type":"ContainerStarted","Data":"f5ffef6a5b71b1522eea9137dcf815e4b0a7f5d6af3716783afce880f81f2ba4"} Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.134074 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-js569" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="registry-server" containerID="cri-o://8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" gracePeriod=2 Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.167403 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.167381145 podStartE2EDuration="9.167381145s" podCreationTimestamp="2026-01-21 15:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:22.153838834 +0000 UTC m=+4944.230671853" watchObservedRunningTime="2026-01-21 15:56:22.167381145 +0000 UTC m=+4944.244214164" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.586746 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.729327 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") pod \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.729729 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") pod \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.729765 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") pod \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\" (UID: \"a7e81ecf-2d0f-42ee-b056-8dcee4744f20\") " Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.730698 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities" (OuterVolumeSpecName: "utilities") pod "a7e81ecf-2d0f-42ee-b056-8dcee4744f20" (UID: "a7e81ecf-2d0f-42ee-b056-8dcee4744f20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.737851 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d" (OuterVolumeSpecName: "kube-api-access-hd24d") pod "a7e81ecf-2d0f-42ee-b056-8dcee4744f20" (UID: "a7e81ecf-2d0f-42ee-b056-8dcee4744f20"). InnerVolumeSpecName "kube-api-access-hd24d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.842849 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd24d\" (UniqueName: \"kubernetes.io/projected/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-kube-api-access-hd24d\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.842904 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:22 crc kubenswrapper[4902]: I0121 15:56:22.996415 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142597 4902 generic.go:334] "Generic (PLEG): container finished" podID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerID="8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" exitCode=0 Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerDied","Data":"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049"} Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142684 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-js569" event={"ID":"a7e81ecf-2d0f-42ee-b056-8dcee4744f20","Type":"ContainerDied","Data":"36eba7ca272d5745d020ecf93e963e4f76dcea615b0e4afd3a2fb792e8ede2ff"} Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142695 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-js569" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.142707 4902 scope.go:117] "RemoveContainer" containerID="8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.158332 4902 scope.go:117] "RemoveContainer" containerID="c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.174716 4902 scope.go:117] "RemoveContainer" containerID="45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.200455 4902 scope.go:117] "RemoveContainer" containerID="8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" Jan 21 15:56:23 crc kubenswrapper[4902]: E0121 15:56:23.200882 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049\": container with ID starting with 8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049 not found: ID does not exist" containerID="8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.200921 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049"} err="failed to get container status \"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049\": rpc error: code = NotFound desc = could not find container \"8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049\": container with ID starting with 8655fbce3b81fd6737fc9802e82ac2cca2f9b1ec86a66a900a49245686c70049 not found: ID does not exist" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.200972 4902 scope.go:117] "RemoveContainer" containerID="c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473" Jan 21 15:56:23 crc kubenswrapper[4902]: E0121 15:56:23.201373 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473\": container with ID starting with c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473 not found: ID does not exist" containerID="c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.201410 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473"} err="failed to get container status \"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473\": rpc error: code = NotFound desc = could not find container \"c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473\": container with ID starting with c2ac7d265daa5a076911e3ce19a8be39a14dbabcb6e40e288631dc7abfaee473 not found: ID does not exist" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.201434 4902 scope.go:117] "RemoveContainer" containerID="45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46" Jan 21 15:56:23 crc kubenswrapper[4902]: E0121 15:56:23.201783 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46\": container with ID starting 
with 45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46 not found: ID does not exist" containerID="45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.201820 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46"} err="failed to get container status \"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46\": rpc error: code = NotFound desc = could not find container \"45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46\": container with ID starting with 45eafb51fadf190387bd010aa4ae4e9a6224092300f2137c53a9ae4ccbd5cb46 not found: ID does not exist" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.518967 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.580868 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:23 crc kubenswrapper[4902]: I0121 15:56:23.582375 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="dnsmasq-dns" containerID="cri-o://69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7" gracePeriod=10 Jan 21 15:56:24 crc kubenswrapper[4902]: I0121 15:56:24.153754 4902 generic.go:334] "Generic (PLEG): container finished" podID="a211ebd7-f82f-4cc7-91d3-77ec265a5d11" containerID="d5bb4603423cd8d93efab95c695e74107c0f4c4cb84aa804302e21ed56b1a624" exitCode=0 Jan 21 15:56:24 crc kubenswrapper[4902]: I0121 15:56:24.153804 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a211ebd7-f82f-4cc7-91d3-77ec265a5d11","Type":"ContainerDied","Data":"d5bb4603423cd8d93efab95c695e74107c0f4c4cb84aa804302e21ed56b1a624"} Jan 21 15:56:24 crc kubenswrapper[4902]: I0121 15:56:24.993793 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7e81ecf-2d0f-42ee-b056-8dcee4744f20" (UID: "a7e81ecf-2d0f-42ee-b056-8dcee4744f20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.077581 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e81ecf-2d0f-42ee-b056-8dcee4744f20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.163873 4902 generic.go:334] "Generic (PLEG): container finished" podID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerID="69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7" exitCode=0 Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.163951 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerDied","Data":"69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7"} Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.166118 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a211ebd7-f82f-4cc7-91d3-77ec265a5d11","Type":"ContainerStarted","Data":"2354a29fe6856435345d22b0bc640801d3d7413f19461a21bc3f276792b7ec26"} Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.188996 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.188976887 podStartE2EDuration="10.188976887s" podCreationTimestamp="2026-01-21 15:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:25.183727879 +0000 UTC m=+4947.260560908" watchObservedRunningTime="2026-01-21 15:56:25.188976887 +0000 UTC m=+4947.265809916" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.269732 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.292285 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-js569"] Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.453688 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.454222 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 15:56:25 crc kubenswrapper[4902]: E0121 15:56:25.753882 4902 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.21:58802->38.129.56.21:44701: write tcp 38.129.56.21:58802->38.129.56.21:44701: write: broken pipe Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.866485 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.990451 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") pod \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.990641 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") pod \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " Jan 21 15:56:25 crc kubenswrapper[4902]: I0121 15:56:25.990676 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") pod \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\" (UID: \"d2fd66a2-371b-44b8-bdd4-b6be36c4093f\") " Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.003254 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm" (OuterVolumeSpecName: "kube-api-access-mmkgm") pod "d2fd66a2-371b-44b8-bdd4-b6be36c4093f" (UID: "d2fd66a2-371b-44b8-bdd4-b6be36c4093f"). InnerVolumeSpecName "kube-api-access-mmkgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.023348 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config" (OuterVolumeSpecName: "config") pod "d2fd66a2-371b-44b8-bdd4-b6be36c4093f" (UID: "d2fd66a2-371b-44b8-bdd4-b6be36c4093f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.032203 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2fd66a2-371b-44b8-bdd4-b6be36c4093f" (UID: "d2fd66a2-371b-44b8-bdd4-b6be36c4093f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.092737 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmkgm\" (UniqueName: \"kubernetes.io/projected/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-kube-api-access-mmkgm\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.092791 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.092804 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fd66a2-371b-44b8-bdd4-b6be36c4093f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.175856 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" event={"ID":"d2fd66a2-371b-44b8-bdd4-b6be36c4093f","Type":"ContainerDied","Data":"d3e469d97419e7030a7fc682f2e583305b06427dd0e454f49cfc9cbeb69e90f4"} Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.175921 4902 scope.go:117] "RemoveContainer" containerID="69071f8a5cdc3de0675cee96512e97a433df1c9cd0588a392299c704d9d943f7" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.176257 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-zhthq" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.204929 4902 scope.go:117] "RemoveContainer" containerID="0eec98b5d0b0be8d198331d620aaf26c943f2f70750ff630c0d78b7c5a83456c" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.209540 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.215908 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-zhthq"] Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.303499 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" path="/var/lib/kubelet/pods/a7e81ecf-2d0f-42ee-b056-8dcee4744f20/volumes" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.304281 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" path="/var/lib/kubelet/pods/d2fd66a2-371b-44b8-bdd4-b6be36c4093f/volumes" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.602440 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.602789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:26 crc kubenswrapper[4902]: I0121 15:56:26.974615 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 15:56:29 crc kubenswrapper[4902]: I0121 15:56:29.149262 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:29 crc kubenswrapper[4902]: I0121 15:56:29.244146 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 15:56:30 crc kubenswrapper[4902]: I0121 15:56:30.052574 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-galera-0" Jan 21 15:56:30 crc kubenswrapper[4902]: I0121 15:56:30.134540 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.752961 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753480 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="dnsmasq-dns" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753493 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="dnsmasq-dns" Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753501 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="registry-server" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753507 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="registry-server" Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753524 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="extract-utilities" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753530 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="extract-utilities" Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753539 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="init" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753545 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="init" Jan 21 15:56:33 crc kubenswrapper[4902]: E0121 15:56:33.753560 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="extract-content" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753566 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="extract-content" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753688 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2fd66a2-371b-44b8-bdd4-b6be36c4093f" containerName="dnsmasq-dns" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.753714 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e81ecf-2d0f-42ee-b056-8dcee4744f20" containerName="registry-server" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.754301 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.756780 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.764258 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.814686 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.814984 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.916018 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.916471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.917526 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:33 crc kubenswrapper[4902]: I0121 15:56:33.935809 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") pod \"root-account-create-update-p5grh\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:34 crc kubenswrapper[4902]: I0121 15:56:34.090074 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:34 crc kubenswrapper[4902]: I0121 15:56:34.516884 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:35 crc kubenswrapper[4902]: I0121 15:56:35.244392 4902 generic.go:334] "Generic (PLEG): container finished" podID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" containerID="22e6c4bda8a7a16db8551cc07c7e5779cb515519925f610dd7708c60c5c8a6fc" exitCode=0 Jan 21 15:56:35 crc kubenswrapper[4902]: I0121 15:56:35.244544 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p5grh" event={"ID":"53228908-4e69-4bbf-a0ed-aaf5a64f5443","Type":"ContainerDied","Data":"22e6c4bda8a7a16db8551cc07c7e5779cb515519925f610dd7708c60c5c8a6fc"} Jan 21 15:56:35 crc kubenswrapper[4902]: I0121 15:56:35.244696 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p5grh" event={"ID":"53228908-4e69-4bbf-a0ed-aaf5a64f5443","Type":"ContainerStarted","Data":"ed9272b6ec563938aa1aec284408bccdc33dd19498aacbe257717214ea1e4967"} Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.569021 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.657894 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") pod \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.658083 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") pod \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\" (UID: \"53228908-4e69-4bbf-a0ed-aaf5a64f5443\") " Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.658608 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53228908-4e69-4bbf-a0ed-aaf5a64f5443" (UID: "53228908-4e69-4bbf-a0ed-aaf5a64f5443"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.664150 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p" (OuterVolumeSpecName: "kube-api-access-6lb8p") pod "53228908-4e69-4bbf-a0ed-aaf5a64f5443" (UID: "53228908-4e69-4bbf-a0ed-aaf5a64f5443"). InnerVolumeSpecName "kube-api-access-6lb8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.759453 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lb8p\" (UniqueName: \"kubernetes.io/projected/53228908-4e69-4bbf-a0ed-aaf5a64f5443-kube-api-access-6lb8p\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:36 crc kubenswrapper[4902]: I0121 15:56:36.759497 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53228908-4e69-4bbf-a0ed-aaf5a64f5443-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:37 crc kubenswrapper[4902]: I0121 15:56:37.267956 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p5grh" event={"ID":"53228908-4e69-4bbf-a0ed-aaf5a64f5443","Type":"ContainerDied","Data":"ed9272b6ec563938aa1aec284408bccdc33dd19498aacbe257717214ea1e4967"} Jan 21 15:56:37 crc kubenswrapper[4902]: I0121 15:56:37.268013 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed9272b6ec563938aa1aec284408bccdc33dd19498aacbe257717214ea1e4967" Jan 21 15:56:37 crc kubenswrapper[4902]: I0121 15:56:37.268012 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p5grh" Jan 21 15:56:40 crc kubenswrapper[4902]: I0121 15:56:40.203332 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:40 crc kubenswrapper[4902]: I0121 15:56:40.209029 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p5grh"] Jan 21 15:56:40 crc kubenswrapper[4902]: I0121 15:56:40.304652 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" path="/var/lib/kubelet/pods/53228908-4e69-4bbf-a0ed-aaf5a64f5443/volumes" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.219949 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qc8ct"] Jan 21 15:56:45 crc kubenswrapper[4902]: E0121 15:56:45.220673 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" containerName="mariadb-account-create-update" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.220687 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" containerName="mariadb-account-create-update" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.220815 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="53228908-4e69-4bbf-a0ed-aaf5a64f5443" containerName="mariadb-account-create-update" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.221383 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.223911 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.229478 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qc8ct"] Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.286101 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.286310 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.387838 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.387912 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.389181 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.412082 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") pod \"root-account-create-update-qc8ct\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:45 crc kubenswrapper[4902]: I0121 15:56:45.540332 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:46 crc kubenswrapper[4902]: I0121 15:56:45.966158 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qc8ct"] Jan 21 15:56:46 crc kubenswrapper[4902]: W0121 15:56:45.985330 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd642b708_8313_4edd_8183_4dcd679721b6.slice/crio-e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3 WatchSource:0}: Error finding container e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3: Status 404 returned error can't find the container with id e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3 Jan 21 15:56:46 crc kubenswrapper[4902]: I0121 15:56:46.334142 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qc8ct" event={"ID":"d642b708-8313-4edd-8183-4dcd679721b6","Type":"ContainerStarted","Data":"2f8dc76ea47c61aa0225c738e775c625c670e1dc7f5e344791fe2553026ed3d2"} Jan 21 15:56:46 crc kubenswrapper[4902]: I0121 15:56:46.334489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qc8ct" event={"ID":"d642b708-8313-4edd-8183-4dcd679721b6","Type":"ContainerStarted","Data":"e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3"} Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.342739 4902 generic.go:334] "Generic (PLEG): container finished" podID="d642b708-8313-4edd-8183-4dcd679721b6" containerID="2f8dc76ea47c61aa0225c738e775c625c670e1dc7f5e344791fe2553026ed3d2" exitCode=0 Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.342783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qc8ct" event={"ID":"d642b708-8313-4edd-8183-4dcd679721b6","Type":"ContainerDied","Data":"2f8dc76ea47c61aa0225c738e775c625c670e1dc7f5e344791fe2553026ed3d2"} Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.690725 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.825298 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") pod \"d642b708-8313-4edd-8183-4dcd679721b6\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.826108 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d642b708-8313-4edd-8183-4dcd679721b6" (UID: "d642b708-8313-4edd-8183-4dcd679721b6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.826216 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") pod \"d642b708-8313-4edd-8183-4dcd679721b6\" (UID: \"d642b708-8313-4edd-8183-4dcd679721b6\") " Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.826705 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d642b708-8313-4edd-8183-4dcd679721b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.831696 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6" (OuterVolumeSpecName: "kube-api-access-df8b6") pod "d642b708-8313-4edd-8183-4dcd679721b6" (UID: "d642b708-8313-4edd-8183-4dcd679721b6"). InnerVolumeSpecName "kube-api-access-df8b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:56:47 crc kubenswrapper[4902]: I0121 15:56:47.928669 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df8b6\" (UniqueName: \"kubernetes.io/projected/d642b708-8313-4edd-8183-4dcd679721b6-kube-api-access-df8b6\") on node \"crc\" DevicePath \"\"" Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.351383 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qc8ct" event={"ID":"d642b708-8313-4edd-8183-4dcd679721b6","Type":"ContainerDied","Data":"e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3"} Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.352251 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e82c3b0b36a1d38d468ed2cce701ef20f23ba2ba7ea1e628023569acec0027f3" Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.352368 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qc8ct" Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.354497 4902 generic.go:334] "Generic (PLEG): container finished" podID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerID="56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28" exitCode=0 Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.354545 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerDied","Data":"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28"} Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.358154 4902 generic.go:334] "Generic (PLEG): container finished" podID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerID="f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757" exitCode=0 Jan 21 15:56:48 crc kubenswrapper[4902]: I0121 15:56:48.358218 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerDied","Data":"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757"} Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.368848 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerStarted","Data":"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa"} Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.369649 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.371179 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerStarted","Data":"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa"} Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.371447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.400504 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.400478326 podStartE2EDuration="37.400478326s" podCreationTimestamp="2026-01-21 15:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:49.393988773 +0000 UTC m=+4971.470821842" watchObservedRunningTime="2026-01-21 15:56:49.400478326 +0000 UTC m=+4971.477311365" Jan 21 15:56:49 crc kubenswrapper[4902]: I0121 15:56:49.422030 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.422006333 podStartE2EDuration="36.422006333s" podCreationTimestamp="2026-01-21 15:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:56:49.416605941 +0000 UTC m=+4971.493438980" watchObservedRunningTime="2026-01-21 15:56:49.422006333 +0000 UTC m=+4971.498839362" Jan 21 15:57:04 crc kubenswrapper[4902]: I0121 15:57:04.159726 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:04 crc kubenswrapper[4902]: I0121 15:57:04.609231 4902 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.909147 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"] Jan 21 15:57:08 crc kubenswrapper[4902]: E0121 15:57:08.909854 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d642b708-8313-4edd-8183-4dcd679721b6" containerName="mariadb-account-create-update" Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.909869 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d642b708-8313-4edd-8183-4dcd679721b6" containerName="mariadb-account-create-update" Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.910028 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d642b708-8313-4edd-8183-4dcd679721b6" containerName="mariadb-account-create-update" Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.914304 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.918897 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"] Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.944099 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.944171 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:08 crc kubenswrapper[4902]: I0121 15:57:08.944227 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.045252 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.045317 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.045368 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " 
pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.046329 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.046330 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.069308 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") pod \"dnsmasq-dns-699964fbc-tv7h5\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.235935 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.622817 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:09 crc kubenswrapper[4902]: I0121 15:57:09.695409 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"] Jan 21 15:57:10 crc kubenswrapper[4902]: I0121 15:57:10.157878 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:57:10 crc kubenswrapper[4902]: I0121 15:57:10.534658 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerStarted","Data":"55e0afd2388c802fda6ed46b943d3d217c1e55bd357a95a943ea57a5cb135bcf"} Jan 21 15:57:10 crc kubenswrapper[4902]: I0121 15:57:10.534699 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerStarted","Data":"44ae670f7eef2a4f69445e5a528bd2462006fde1e2ee9d0bfd1314e5b3fef469"} Jan 21 15:57:11 crc kubenswrapper[4902]: I0121 15:57:11.545377 4902 generic.go:334] "Generic (PLEG): container finished" podID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerID="55e0afd2388c802fda6ed46b943d3d217c1e55bd357a95a943ea57a5cb135bcf" exitCode=0 Jan 21 15:57:11 crc kubenswrapper[4902]: I0121 15:57:11.545436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerDied","Data":"55e0afd2388c802fda6ed46b943d3d217c1e55bd357a95a943ea57a5cb135bcf"} Jan 21 15:57:12 crc kubenswrapper[4902]: I0121 15:57:12.555967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerStarted","Data":"dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133"} Jan 21 15:57:12 crc kubenswrapper[4902]: I0121 15:57:12.556176 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:12 crc kubenswrapper[4902]: I0121 
15:57:12.580001 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" podStartSLOduration=4.5799800059999995 podStartE2EDuration="4.579980006s" podCreationTimestamp="2026-01-21 15:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:57:12.579085381 +0000 UTC m=+4994.655918420" watchObservedRunningTime="2026-01-21 15:57:12.579980006 +0000 UTC m=+4994.656813045" Jan 21 15:57:13 crc kubenswrapper[4902]: I0121 15:57:13.459486 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" containerID="cri-o://0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" gracePeriod=604797 Jan 21 15:57:14 crc kubenswrapper[4902]: I0121 15:57:14.475254 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="rabbitmq" containerID="cri-o://d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" gracePeriod=604796 Jan 21 15:57:14 crc kubenswrapper[4902]: I0121 15:57:14.607156 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.245:5671: connect: connection refused" Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.237827 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.303090 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.303437 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="dnsmasq-dns" containerID="cri-o://df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d" gracePeriod=10 Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.616275 4902 generic.go:334] "Generic (PLEG): container finished" podID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerID="df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d" exitCode=0 Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.616378 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerDied","Data":"df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d"} Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.737544 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.814927 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") pod \"09f238e8-eb6e-47ac-818b-3558f9f6a841\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.815133 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") pod \"09f238e8-eb6e-47ac-818b-3558f9f6a841\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.815226 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") pod \"09f238e8-eb6e-47ac-818b-3558f9f6a841\" (UID: \"09f238e8-eb6e-47ac-818b-3558f9f6a841\") " Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.820338 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb" (OuterVolumeSpecName: "kube-api-access-nrmzb") pod "09f238e8-eb6e-47ac-818b-3558f9f6a841" (UID: "09f238e8-eb6e-47ac-818b-3558f9f6a841"). InnerVolumeSpecName "kube-api-access-nrmzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.849987 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09f238e8-eb6e-47ac-818b-3558f9f6a841" (UID: "09f238e8-eb6e-47ac-818b-3558f9f6a841"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.856006 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config" (OuterVolumeSpecName: "config") pod "09f238e8-eb6e-47ac-818b-3558f9f6a841" (UID: "09f238e8-eb6e-47ac-818b-3558f9f6a841"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.916615 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.916650 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrmzb\" (UniqueName: \"kubernetes.io/projected/09f238e8-eb6e-47ac-818b-3558f9f6a841-kube-api-access-nrmzb\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:19 crc kubenswrapper[4902]: I0121 15:57:19.916660 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f238e8-eb6e-47ac-818b-3558f9f6a841-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.005094 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018232 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018299 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018345 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018409 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018472 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018534 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018675 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018742 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018790 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") pod 
\"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018823 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") pod \"53c0907a-0c62-4813-af74-b0f97c0e3c16\" (UID: \"53c0907a-0c62-4813-af74-b0f97c0e3c16\") " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.018680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.019766 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.020467 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.021564 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info" (OuterVolumeSpecName: "pod-info") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.023945 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.027212 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.028403 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw" (OuterVolumeSpecName: "kube-api-access-xjczw") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "kube-api-access-xjczw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.032322 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f" (OuterVolumeSpecName: "persistence") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.045776 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data" (OuterVolumeSpecName: "config-data") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.081974 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf" (OuterVolumeSpecName: "server-conf") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120344 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120387 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120397 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120405 4902 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120413 4902 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/53c0907a-0c62-4813-af74-b0f97c0e3c16-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120423 4902 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/53c0907a-0c62-4813-af74-b0f97c0e3c16-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120510 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") on node \"crc\" " Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120526 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120540 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjczw\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-kube-api-access-xjczw\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.120548 4902 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/53c0907a-0c62-4813-af74-b0f97c0e3c16-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.138546 4902 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.138724 4902 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f") on node "crc" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.145280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "53c0907a-0c62-4813-af74-b0f97c0e3c16" (UID: "53c0907a-0c62-4813-af74-b0f97c0e3c16"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.221924 4902 reconciler_common.go:293] "Volume detached for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.221972 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/53c0907a-0c62-4813-af74-b0f97c0e3c16-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630119 4902 generic.go:334] "Generic (PLEG): container finished" podID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerID="0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" exitCode=0 Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630272 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerDied","Data":"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa"} Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630356 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"53c0907a-0c62-4813-af74-b0f97c0e3c16","Type":"ContainerDied","Data":"823499dcc6be68200313f9990f3b406f719dc54b8a2e736053275316c037d578"} Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630275 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.630404 4902 scope.go:117] "RemoveContainer" containerID="0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.636340 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" event={"ID":"09f238e8-eb6e-47ac-818b-3558f9f6a841","Type":"ContainerDied","Data":"cd7e0cd801ba79f538e3c63c7aa4f7926d46008854b1879da441818cd04cf0dc"} Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.636447 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-bqqlm" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.662743 4902 scope.go:117] "RemoveContainer" containerID="f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.665841 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.672255 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.692667 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.703896 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-bqqlm"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708189 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.708480 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="dnsmasq-dns" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708495 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="dnsmasq-dns" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.708511 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="init" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708517 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="init" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.708534 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="setup-container" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708542 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="setup-container" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.708571 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.708578 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.727168 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" containerName="rabbitmq" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.727212 4902 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" containerName="dnsmasq-dns" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.735629 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.735817 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.742431 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.742879 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ssxxh" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.742891 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.743670 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.743728 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.743753 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.744832 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.794555 4902 scope.go:117] "RemoveContainer" containerID="0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.794918 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa\": container with ID starting with 0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa not found: ID does not exist" containerID="0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.794953 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa"} err="failed to get container status \"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa\": rpc error: code = NotFound desc = could not find container \"0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa\": container with ID starting with 0de455648b051d5caf7405ab2034ee891cfa156c6fe7f4ac840b9c45b4bc8baa not found: ID does not exist" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.794975 4902 scope.go:117] "RemoveContainer" containerID="f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757" Jan 21 15:57:20 crc kubenswrapper[4902]: E0121 15:57:20.795283 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757\": container with ID starting with f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757 not found: ID does not exist" containerID="f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.795305 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757"} err="failed to get container status \"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757\": rpc error: code = NotFound desc = could not find container \"f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757\": container with ID starting with f8b75d54a767665b098236f966e923a1937e74120339bc2fce010451c54f7757 not found: ID does not exist" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.795317 4902 scope.go:117] "RemoveContainer" containerID="df1c8824638373bd513fe569a7cfc99ba5575ba306170d90a24a3e259265c66d" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.819335 4902 scope.go:117] "RemoveContainer" containerID="d0d1ff36d9c251f2b2fbf7c284bbe148be1cf281267966cd9400c8ef5a5fdfad" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832439 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832735 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832773 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832794 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832832 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832910 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: 
\"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.832978 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.833016 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.833073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.833099 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhtgd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-kube-api-access-zhtgd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933505 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933605 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933626 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc 
kubenswrapper[4902]: I0121 15:57:20.933657 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933699 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933760 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933785 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhtgd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-kube-api-access-zhtgd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.933841 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.934969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.935299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.935821 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.936519 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-plugins-conf\") pod \"rabbitmq-server-0\" 
(UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.936916 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-config-data\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.938602 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.938632 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/044d17188a71d87a2f162043dfcb436253bd0043d87dd6a91403116fc167aa96/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.939562 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.940368 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.940691 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.941364 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.968758 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhtgd\" (UniqueName: \"kubernetes.io/projected/7f24aaa5-50e0-4e80-ba28-3fa2b770fac8-kube-api-access-zhtgd\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:20 crc kubenswrapper[4902]: I0121 15:57:20.972442 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-addf4df9-b4f1-4fd6-b60b-45916361d58f\") pod \"rabbitmq-server-0\" (UID: \"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8\") " pod="openstack/rabbitmq-server-0" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.068391 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.123556 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136278 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136314 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136337 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136358 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136509 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136542 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136641 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136742 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136788 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136809 4902 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.136829 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.137074 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.137431 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.141769 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.142391 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.142799 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info" (OuterVolumeSpecName: "pod-info") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.150880 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z" (OuterVolumeSpecName: "kube-api-access-7ct2z") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "kube-api-access-7ct2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.164390 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.164603 4902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560 podName:c6f17a65-e372-463d-b875-c8acdd3a8a04 nodeName:}" failed. No retries permitted until 2026-01-21 15:57:21.664581313 +0000 UTC m=+5003.741414342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.174440 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data" (OuterVolumeSpecName: "config-data") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.197905 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf" (OuterVolumeSpecName: "server-conf") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238905 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238933 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238945 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ct2z\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-kube-api-access-7ct2z\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238955 4902 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238963 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238971 4902 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c6f17a65-e372-463d-b875-c8acdd3a8a04-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238979 4902 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c6f17a65-e372-463d-b875-c8acdd3a8a04-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238987 4902 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c6f17a65-e372-463d-b875-c8acdd3a8a04-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.238994 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.248306 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.340413 4902 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c6f17a65-e372-463d-b875-c8acdd3a8a04-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.510937 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.647455 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8","Type":"ContainerStarted","Data":"3417b2353d6f946f9428306ce572a32bbc9d237d4953d50947d70635e14f3289"} Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650379 4902 generic.go:334] "Generic (PLEG): container finished" podID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerID="d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" exitCode=0 Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerDied","Data":"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa"} Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650465 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c6f17a65-e372-463d-b875-c8acdd3a8a04","Type":"ContainerDied","Data":"9d7549ef3e343170f623b1703d13ef1cc7e5adec835d42203926b1f5605c69d7"} Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650466 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.650489 4902 scope.go:117] "RemoveContainer" containerID="d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.666235 4902 scope.go:117] "RemoveContainer" containerID="56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.681541 4902 scope.go:117] "RemoveContainer" containerID="d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.681964 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa\": container with ID starting with d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa not found: ID does not exist" containerID="d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.681998 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa"} err="failed to get container status \"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa\": rpc error: code = NotFound desc = could not find container \"d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa\": container with ID starting with d0fa46c39bb2bed4ead8b6781ee0080d5294cdb28ba5e654231d91ab5d938efa not found: ID does not exist" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.682023 4902 scope.go:117] "RemoveContainer" 
containerID="56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28" Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.682388 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28\": container with ID starting with 56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28 not found: ID does not exist" containerID="56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.682432 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28"} err="failed to get container status \"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28\": rpc error: code = NotFound desc = could not find container \"56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28\": container with ID starting with 56fede2a049d35557b9b5addf25353c93e8b313f6e9712ce4b830ecec8079d28 not found: ID does not exist" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.746017 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"c6f17a65-e372-463d-b875-c8acdd3a8a04\" (UID: \"c6f17a65-e372-463d-b875-c8acdd3a8a04\") " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.759119 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560" (OuterVolumeSpecName: "persistence") pod "c6f17a65-e372-463d-b875-c8acdd3a8a04" (UID: "c6f17a65-e372-463d-b875-c8acdd3a8a04"). InnerVolumeSpecName "pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.848353 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") on node \"crc\" " Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.868215 4902 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.868375 4902 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560") on node "crc" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.888872 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.913179 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.917395 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.917727 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="rabbitmq" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.917743 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="rabbitmq" Jan 21 15:57:21 crc kubenswrapper[4902]: E0121 15:57:21.917763 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="setup-container" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.917769 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="setup-container" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.917954 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" containerName="rabbitmq" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.921880 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.926785 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.927569 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.927648 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928123 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928325 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928481 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-928bn" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928600 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.928814 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 15:57:21 crc kubenswrapper[4902]: I0121 15:57:21.949763 4902 reconciler_common.go:293] "Volume detached for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.051949 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052029 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052111 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052145 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzqw4\" 
(UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-kube-api-access-lzqw4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052534 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052598 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052637 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052694 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.052745 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154370 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154405 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154428 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154460 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154515 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154538 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154596 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.154649 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzqw4\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-kube-api-access-lzqw4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.155497 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.156582 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.157579 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.157727 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.158329 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.159539 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.159615 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ca3246581f7b05cdf38cd2988972c40f4ce4dbd3e3f2637534a551fbe51cdea/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.160704 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.163146 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.163453 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.163289 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.178013 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzqw4\" (UniqueName: \"kubernetes.io/projected/e0bcf8cd-3dd9-409b-84d9-693f7e471fc1-kube-api-access-lzqw4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.195518 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1eb0c5cf-da5d-460c-8e85-b920a3497560\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.248289 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.307650 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f238e8-eb6e-47ac-818b-3558f9f6a841" path="/var/lib/kubelet/pods/09f238e8-eb6e-47ac-818b-3558f9f6a841/volumes" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.309354 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c0907a-0c62-4813-af74-b0f97c0e3c16" path="/var/lib/kubelet/pods/53c0907a-0c62-4813-af74-b0f97c0e3c16/volumes" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.312020 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6f17a65-e372-463d-b875-c8acdd3a8a04" path="/var/lib/kubelet/pods/c6f17a65-e372-463d-b875-c8acdd3a8a04/volumes" Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.488559 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:57:22 crc kubenswrapper[4902]: W0121 15:57:22.492658 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0bcf8cd_3dd9_409b_84d9_693f7e471fc1.slice/crio-bce62c7a0fb48ec041cdccaf25ff949feab0524109c1047b0cc3bd9605f3b21a WatchSource:0}: Error finding container bce62c7a0fb48ec041cdccaf25ff949feab0524109c1047b0cc3bd9605f3b21a: Status 404 returned error can't find the container with id bce62c7a0fb48ec041cdccaf25ff949feab0524109c1047b0cc3bd9605f3b21a Jan 21 15:57:22 crc kubenswrapper[4902]: I0121 15:57:22.658292 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1","Type":"ContainerStarted","Data":"bce62c7a0fb48ec041cdccaf25ff949feab0524109c1047b0cc3bd9605f3b21a"} Jan 21 15:57:23 crc kubenswrapper[4902]: I0121 15:57:23.676562 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8","Type":"ContainerStarted","Data":"ed2ed34b6a745712f048be04ebe104f3fd28e32858c6ad02778421c757dbe1a2"} Jan 21 15:57:23 crc kubenswrapper[4902]: I0121 15:57:23.821117 4902 scope.go:117] "RemoveContainer" containerID="832bdc2244fcacc08faf09f474999607b365e44c63c97c499a7f0ae90cc52a03" Jan 21 15:57:23 crc kubenswrapper[4902]: I0121 15:57:23.840200 4902 scope.go:117] "RemoveContainer" containerID="3fd7269ed4af2b5ed8789200b615c63fc1a7f708f657559905419462e7af7de1" Jan 21 15:57:23 crc kubenswrapper[4902]: I0121 15:57:23.880330 4902 scope.go:117] "RemoveContainer" containerID="01b1e3385b91a0ac713735c08ca6d5002c8c460c4cfa3d2e686ace79189fad0a" Jan 21 15:57:24 crc kubenswrapper[4902]: I0121 15:57:24.687504 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1","Type":"ContainerStarted","Data":"192d4807b0e153ac1c718bb1c38de8050845e282eff52bf087f9f2ae6f85ee8f"} Jan 21 15:57:47 crc kubenswrapper[4902]: I0121 15:57:47.770182 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:57:47 crc kubenswrapper[4902]: I0121 15:57:47.770659 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:57:55 crc kubenswrapper[4902]: I0121 15:57:55.929625 4902 generic.go:334] "Generic (PLEG): container finished" podID="e0bcf8cd-3dd9-409b-84d9-693f7e471fc1" containerID="192d4807b0e153ac1c718bb1c38de8050845e282eff52bf087f9f2ae6f85ee8f" exitCode=0 Jan 21 15:57:55 crc kubenswrapper[4902]: I0121 15:57:55.929723 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1","Type":"ContainerDied","Data":"192d4807b0e153ac1c718bb1c38de8050845e282eff52bf087f9f2ae6f85ee8f"} Jan 21 15:57:55 crc kubenswrapper[4902]: I0121 15:57:55.933919 4902 generic.go:334] "Generic (PLEG): container finished" podID="7f24aaa5-50e0-4e80-ba28-3fa2b770fac8" containerID="ed2ed34b6a745712f048be04ebe104f3fd28e32858c6ad02778421c757dbe1a2" exitCode=0 Jan 21 15:57:55 crc kubenswrapper[4902]: I0121 15:57:55.933991 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8","Type":"ContainerDied","Data":"ed2ed34b6a745712f048be04ebe104f3fd28e32858c6ad02778421c757dbe1a2"} Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.942706 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7f24aaa5-50e0-4e80-ba28-3fa2b770fac8","Type":"ContainerStarted","Data":"d65a409ab929e9f371da19394ebc425f7079289f9b0fedcb69a0c3c57e8982b7"} Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.943276 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.945690 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0bcf8cd-3dd9-409b-84d9-693f7e471fc1","Type":"ContainerStarted","Data":"6e3b57ec46142cef2394ad1fbbb883607474de0aaecec18122bbd60fbe9f25ce"} Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.945950 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.971746 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.971728667 podStartE2EDuration="36.971728667s" podCreationTimestamp="2026-01-21 15:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:57:56.965052488 +0000 UTC m=+5039.041885517" watchObservedRunningTime="2026-01-21 15:57:56.971728667 +0000 UTC m=+5039.048561696" Jan 21 15:57:56 crc kubenswrapper[4902]: I0121 15:57:56.995100 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.995078317 podStartE2EDuration="35.995078317s" podCreationTimestamp="2026-01-21 15:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:57:56.986735381 +0000 UTC m=+5039.063568400" watchObservedRunningTime="2026-01-21 15:57:56.995078317 +0000 UTC m=+5039.071911346" Jan 21 15:58:11 crc kubenswrapper[4902]: I0121 15:58:11.071213 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-server-0" Jan 21 15:58:12 crc kubenswrapper[4902]: I0121 15:58:12.251271 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.014713 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.016016 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.018511 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8tnn5" Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.025403 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.105617 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") pod \"mariadb-client\" (UID: \"1f924640-2d46-4126-b047-2d3e65c3da76\") " pod="openstack/mariadb-client" Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.206651 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") pod \"mariadb-client\" (UID: \"1f924640-2d46-4126-b047-2d3e65c3da76\") " pod="openstack/mariadb-client" Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.224745 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") pod \"mariadb-client\" (UID: \"1f924640-2d46-4126-b047-2d3e65c3da76\") " pod="openstack/mariadb-client" Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.344551 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.837324 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:58:16 crc kubenswrapper[4902]: I0121 15:58:16.842635 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:58:17 crc kubenswrapper[4902]: I0121 15:58:17.097663 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1f924640-2d46-4126-b047-2d3e65c3da76","Type":"ContainerStarted","Data":"c90866c136c366d5f870ba48955662b0f66c8e0794cf907765a3b7080400e2fc"} Jan 21 15:58:17 crc kubenswrapper[4902]: I0121 15:58:17.769708 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:58:17 crc kubenswrapper[4902]: I0121 15:58:17.770002 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:58:18 crc kubenswrapper[4902]: I0121 15:58:18.106619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1f924640-2d46-4126-b047-2d3e65c3da76","Type":"ContainerStarted","Data":"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad"} Jan 21 15:58:18 crc kubenswrapper[4902]: I0121 15:58:18.118613 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.593623713 podStartE2EDuration="2.118592023s" podCreationTimestamp="2026-01-21 15:58:16 +0000 UTC" firstStartedPulling="2026-01-21 15:58:16.842443022 +0000 UTC m=+5058.919276051" lastFinishedPulling="2026-01-21 15:58:17.367411332 +0000 UTC m=+5059.444244361" observedRunningTime="2026-01-21 15:58:18.116652878 +0000 UTC m=+5060.193485907" watchObservedRunningTime="2026-01-21 15:58:18.118592023 +0000 UTC m=+5060.195425052" Jan 21 15:58:30 crc kubenswrapper[4902]: I0121 15:58:30.472253 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:58:30 crc kubenswrapper[4902]: I0121 15:58:30.473121 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" containerName="mariadb-client" containerID="cri-o://ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad" gracePeriod=30 Jan 21 15:58:30 crc kubenswrapper[4902]: I0121 15:58:30.962067 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.036858 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") pod \"1f924640-2d46-4126-b047-2d3e65c3da76\" (UID: \"1f924640-2d46-4126-b047-2d3e65c3da76\") " Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.042520 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk" (OuterVolumeSpecName: "kube-api-access-pp6kk") pod "1f924640-2d46-4126-b047-2d3e65c3da76" (UID: "1f924640-2d46-4126-b047-2d3e65c3da76"). InnerVolumeSpecName "kube-api-access-pp6kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.138888 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp6kk\" (UniqueName: \"kubernetes.io/projected/1f924640-2d46-4126-b047-2d3e65c3da76-kube-api-access-pp6kk\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216252 4902 generic.go:334] "Generic (PLEG): container finished" podID="1f924640-2d46-4126-b047-2d3e65c3da76" containerID="ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad" exitCode=143 Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216305 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1f924640-2d46-4126-b047-2d3e65c3da76","Type":"ContainerDied","Data":"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad"} Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216337 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"1f924640-2d46-4126-b047-2d3e65c3da76","Type":"ContainerDied","Data":"c90866c136c366d5f870ba48955662b0f66c8e0794cf907765a3b7080400e2fc"} Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216347 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.216358 4902 scope.go:117] "RemoveContainer" containerID="ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad" Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.250130 4902 scope.go:117] "RemoveContainer" containerID="ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad" Jan 21 15:58:31 crc kubenswrapper[4902]: E0121 15:58:31.252240 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad\": container with ID starting with ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad not found: ID does not exist" containerID="ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad" Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.308526 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad"} err="failed to get container status \"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad\": rpc error: code = NotFound desc = could not find container \"ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad\": container with ID starting with ffe884b5885fdbe3470f1de30238ff264aef7e6c4858819470e1224cb9e12cad not found: ID does not exist" Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.322289 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:58:31 crc kubenswrapper[4902]: I0121 15:58:31.327119 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:58:32 crc kubenswrapper[4902]: I0121 15:58:32.312166 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" path="/var/lib/kubelet/pods/1f924640-2d46-4126-b047-2d3e65c3da76/volumes" Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.770678 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.771601 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.771713 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.772718 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:58:47 crc kubenswrapper[4902]: I0121 15:58:47.772855 4902 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb" gracePeriod=600 Jan 21 15:58:48 crc kubenswrapper[4902]: I0121 15:58:48.365780 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb" exitCode=0 Jan 21 15:58:48 crc kubenswrapper[4902]: I0121 15:58:48.365821 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb"} Jan 21 15:58:48 crc kubenswrapper[4902]: I0121 15:58:48.365851 4902 scope.go:117] "RemoveContainer" containerID="75c0f5ae3bc21c340ea0e6051f58fe79169c11f10c8dc4eb0b937aaba4616eea" Jan 21 15:58:49 crc kubenswrapper[4902]: I0121 15:58:49.380113 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e"} Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.146578 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"] Jan 21 16:00:00 crc kubenswrapper[4902]: E0121 16:00:00.147518 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" containerName="mariadb-client" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.147534 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" containerName="mariadb-client" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.147733 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f924640-2d46-4126-b047-2d3e65c3da76" containerName="mariadb-client" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.148479 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.151228 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.151367 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.158234 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"] Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.283036 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.283389 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.283593 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.385325 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.385801 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.385902 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.386807 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") pod 
\"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.392002 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.407340 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") pod \"collect-profiles-29483520-jmqbp\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:00 crc kubenswrapper[4902]: I0121 16:00:00.488335 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:01 crc kubenswrapper[4902]: I0121 16:00:01.022303 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"] Jan 21 16:00:01 crc kubenswrapper[4902]: I0121 16:00:01.189750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" event={"ID":"2f705e9e-4608-4e35-9f28-665a52f2aba6","Type":"ContainerStarted","Data":"7f12a7c2197d8c4c6016efffad3ddef70f0efd243fb60d5e18a6123284b9f8d8"} Jan 21 16:00:02 crc kubenswrapper[4902]: I0121 16:00:02.199953 4902 generic.go:334] "Generic (PLEG): container finished" podID="2f705e9e-4608-4e35-9f28-665a52f2aba6" containerID="32fe8ff5a7cc5267205a3f1e8b759ee5d99a41ef6bca9732cd6d5478ff974b57" exitCode=0 Jan 21 16:00:02 crc kubenswrapper[4902]: I0121 16:00:02.200413 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" event={"ID":"2f705e9e-4608-4e35-9f28-665a52f2aba6","Type":"ContainerDied","Data":"32fe8ff5a7cc5267205a3f1e8b759ee5d99a41ef6bca9732cd6d5478ff974b57"} Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.506040 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.635301 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") pod \"2f705e9e-4608-4e35-9f28-665a52f2aba6\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.635418 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") pod \"2f705e9e-4608-4e35-9f28-665a52f2aba6\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.635546 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") pod \"2f705e9e-4608-4e35-9f28-665a52f2aba6\" (UID: \"2f705e9e-4608-4e35-9f28-665a52f2aba6\") " Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.636906 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f705e9e-4608-4e35-9f28-665a52f2aba6" (UID: "2f705e9e-4608-4e35-9f28-665a52f2aba6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.642558 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls" (OuterVolumeSpecName: "kube-api-access-kb7ls") pod "2f705e9e-4608-4e35-9f28-665a52f2aba6" (UID: "2f705e9e-4608-4e35-9f28-665a52f2aba6"). InnerVolumeSpecName "kube-api-access-kb7ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.644690 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f705e9e-4608-4e35-9f28-665a52f2aba6" (UID: "2f705e9e-4608-4e35-9f28-665a52f2aba6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.737803 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f705e9e-4608-4e35-9f28-665a52f2aba6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.737841 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb7ls\" (UniqueName: \"kubernetes.io/projected/2f705e9e-4608-4e35-9f28-665a52f2aba6-kube-api-access-kb7ls\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:03 crc kubenswrapper[4902]: I0121 16:00:03.737850 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f705e9e-4608-4e35-9f28-665a52f2aba6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.214518 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" event={"ID":"2f705e9e-4608-4e35-9f28-665a52f2aba6","Type":"ContainerDied","Data":"7f12a7c2197d8c4c6016efffad3ddef70f0efd243fb60d5e18a6123284b9f8d8"} Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.214575 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f12a7c2197d8c4c6016efffad3ddef70f0efd243fb60d5e18a6123284b9f8d8" Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.214573 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp" Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.578878 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"] Jan 21 16:00:04 crc kubenswrapper[4902]: I0121 16:00:04.584345 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-zk92d"] Jan 21 16:00:06 crc kubenswrapper[4902]: I0121 16:00:06.304678 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebd9484-ab72-4bbd-84e7-99f28795ad85" path="/var/lib/kubelet/pods/bebd9484-ab72-4bbd-84e7-99f28795ad85/volumes" Jan 21 16:00:24 crc kubenswrapper[4902]: I0121 16:00:24.069476 4902 scope.go:117] "RemoveContainer" containerID="8cefa707fcc5de9979cdbb8b42dd928ba6a77070fd6ce0a791939df6996a702e" Jan 21 16:00:24 crc kubenswrapper[4902]: I0121 16:00:24.116829 4902 scope.go:117] "RemoveContainer" containerID="5f2cc1ae5d9e64887200b316f71af17b596d6725e436d2e46c7acd03a38f0c75" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.602392 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jsntb"] Jan 21 16:00:33 crc kubenswrapper[4902]: E0121 16:00:33.603358 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f705e9e-4608-4e35-9f28-665a52f2aba6" containerName="collect-profiles" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.603375 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f705e9e-4608-4e35-9f28-665a52f2aba6" containerName="collect-profiles" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.603557 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f705e9e-4608-4e35-9f28-665a52f2aba6" containerName="collect-profiles" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.608205 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.622357 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"] Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.696710 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.696810 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.696948 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.798198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.798267 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.798295 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.798781 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.799271 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.818642 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") pod \"certified-operators-jsntb\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:33 crc kubenswrapper[4902]: I0121 16:00:33.926785 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:34 crc kubenswrapper[4902]: I0121 16:00:34.400699 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"] Jan 21 16:00:34 crc kubenswrapper[4902]: I0121 16:00:34.462294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerStarted","Data":"4297834efcd126cd86d3b32bb7784ade06bfc81298e047d8a8b69f151674240f"} Jan 21 16:00:35 crc kubenswrapper[4902]: I0121 16:00:35.472934 4902 generic.go:334] "Generic (PLEG): container finished" podID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerID="5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9" exitCode=0 Jan 21 16:00:35 crc kubenswrapper[4902]: I0121 16:00:35.472986 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerDied","Data":"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9"} Jan 21 16:00:36 crc kubenswrapper[4902]: I0121 16:00:36.481811 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerStarted","Data":"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6"} Jan 21 16:00:37 crc kubenswrapper[4902]: I0121 16:00:37.492966 4902 generic.go:334] "Generic (PLEG): container finished" podID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerID="2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6" exitCode=0 Jan 21 16:00:37 crc kubenswrapper[4902]: I0121 16:00:37.493132 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerDied","Data":"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6"} Jan 21 16:00:38 crc kubenswrapper[4902]: I0121 16:00:38.502570 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerStarted","Data":"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076"} Jan 21 16:00:38 crc kubenswrapper[4902]: I0121 16:00:38.521328 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jsntb" podStartSLOduration=3.093458177 podStartE2EDuration="5.521310605s" podCreationTimestamp="2026-01-21 16:00:33 +0000 UTC" firstStartedPulling="2026-01-21 16:00:35.477563288 +0000 UTC m=+5197.554396357" lastFinishedPulling="2026-01-21 16:00:37.905415756 +0000 UTC m=+5199.982248785" observedRunningTime="2026-01-21 16:00:38.518369252 +0000 UTC m=+5200.595202281" watchObservedRunningTime="2026-01-21 16:00:38.521310605 +0000 UTC m=+5200.598143634" Jan 21 16:00:43 crc kubenswrapper[4902]: I0121 16:00:43.928136 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:43 crc kubenswrapper[4902]: I0121 16:00:43.928796 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:43 crc kubenswrapper[4902]: I0121 16:00:43.971616 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:44 crc kubenswrapper[4902]: I0121 16:00:44.600705 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:44 crc kubenswrapper[4902]: I0121 16:00:44.661887 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"] Jan 21 16:00:46 crc kubenswrapper[4902]: I0121 16:00:46.561977 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jsntb" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="registry-server" containerID="cri-o://0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076" gracePeriod=2 Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.444555 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571522 4902 generic.go:334] "Generic (PLEG): container finished" podID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerID="0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076" exitCode=0 Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571599 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerDied","Data":"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076"} Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsntb" event={"ID":"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346","Type":"ContainerDied","Data":"4297834efcd126cd86d3b32bb7784ade06bfc81298e047d8a8b69f151674240f"} Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571687 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsntb" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.571706 4902 scope.go:117] "RemoveContainer" containerID="0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.598757 4902 scope.go:117] "RemoveContainer" containerID="2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.615684 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") pod \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.615804 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") pod \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.615924 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") pod \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\" (UID: \"b0e8dd27-ba7e-45bf-81ba-fdeb819b8346\") " Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.617289 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities" (OuterVolumeSpecName: "utilities") pod "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" (UID: "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.624104 4902 scope.go:117] "RemoveContainer" containerID="5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.624098 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5" (OuterVolumeSpecName: "kube-api-access-rvsg5") pod "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" (UID: "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346"). InnerVolumeSpecName "kube-api-access-rvsg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.665521 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" (UID: "b0e8dd27-ba7e-45bf-81ba-fdeb819b8346"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.671294 4902 scope.go:117] "RemoveContainer" containerID="0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076" Jan 21 16:00:47 crc kubenswrapper[4902]: E0121 16:00:47.671725 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076\": container with ID starting with 0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076 not found: ID does not exist" containerID="0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.671762 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076"} err="failed to get container status \"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076\": rpc error: code = NotFound desc = could not find container \"0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076\": container with ID starting with 0d02826ea4de94331c8656be6597d3b625c928aa2a2ff03d3bcdecd27a89e076 not found: ID does not exist" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.671799 4902 scope.go:117] "RemoveContainer" containerID="2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6" Jan 21 16:00:47 crc kubenswrapper[4902]: E0121 16:00:47.672151 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6\": container with ID starting with 2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6 not found: ID does not exist" containerID="2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.672180 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6"} err="failed to get container status \"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6\": rpc error: code = NotFound desc = could not find container \"2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6\": container with ID starting with 2753aeaed85fe381977a2c631be18cb8a7ff712e90b7fd7185ea4aab6b1b63d6 not found: ID does not exist" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.672194 4902 scope.go:117] "RemoveContainer" containerID="5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9" Jan 21 16:00:47 crc kubenswrapper[4902]: E0121 16:00:47.672695 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9\": container with ID starting with 5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9 not found: ID does not exist" containerID="5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.672757 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9"} err="failed to get container status \"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9\": rpc error: code = NotFound desc = could not 
find container \"5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9\": container with ID starting with 5bdfbc6fc880b43dbafcb855258304e457d71f395118e651b9def8eb000fc6c9 not found: ID does not exist" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.718014 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvsg5\" (UniqueName: \"kubernetes.io/projected/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-kube-api-access-rvsg5\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.718038 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.718070 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.919911 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"] Jan 21 16:00:47 crc kubenswrapper[4902]: I0121 16:00:47.926002 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jsntb"] Jan 21 16:00:48 crc kubenswrapper[4902]: I0121 16:00:48.303974 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" path="/var/lib/kubelet/pods/b0e8dd27-ba7e-45bf-81ba-fdeb819b8346/volumes" Jan 21 16:01:17 crc kubenswrapper[4902]: I0121 16:01:17.770096 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:01:17 crc kubenswrapper[4902]: I0121 16:01:17.770718 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:01:47 crc kubenswrapper[4902]: I0121 16:01:47.770201 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:01:47 crc kubenswrapper[4902]: I0121 16:01:47.770770 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.770983 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.771541 4902 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.771607 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.772371 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:02:17 crc kubenswrapper[4902]: I0121 16:02:17.772665 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" gracePeriod=600 Jan 21 16:02:17 crc kubenswrapper[4902]: E0121 16:02:17.895376 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:02:18 crc kubenswrapper[4902]: I0121 16:02:18.263994 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" exitCode=0 Jan 21 16:02:18 crc kubenswrapper[4902]: I0121 16:02:18.264038 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e"} Jan 21 16:02:18 crc kubenswrapper[4902]: I0121 16:02:18.264117 4902 scope.go:117] "RemoveContainer" containerID="a22fa5f015dccd31057dbf3720918ed0aa27b09d4ac48d4d56f2468401c0c0fb" Jan 21 16:02:18 crc kubenswrapper[4902]: I0121 16:02:18.264662 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:02:18 crc kubenswrapper[4902]: E0121 16:02:18.264947 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.752243 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 16:02:21 crc kubenswrapper[4902]: E0121 16:02:21.753201 4902 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="extract-content" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753217 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="extract-content" Jan 21 16:02:21 crc kubenswrapper[4902]: E0121 16:02:21.753237 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="extract-utilities" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753246 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="extract-utilities" Jan 21 16:02:21 crc kubenswrapper[4902]: E0121 16:02:21.753260 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="registry-server" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753266 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="registry-server" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753453 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e8dd27-ba7e-45bf-81ba-fdeb819b8346" containerName="registry-server" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.753984 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.757566 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.809807 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8tnn5" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.897450 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.897508 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d2nc\" (UniqueName: \"kubernetes.io/projected/45f02625-70e9-48ec-8dd4-a0bd456a283b-kube-api-access-4d2nc\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.999419 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:21 crc kubenswrapper[4902]: I0121 16:02:21.999482 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d2nc\" (UniqueName: \"kubernetes.io/projected/45f02625-70e9-48ec-8dd4-a0bd456a283b-kube-api-access-4d2nc\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.003203 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability 
not set. Skipping MountDevice... Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.003255 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7a0173fef46fde57e42c640e0fbcdc871aa92738e93088ab89d2b9968325093c/globalmount\"" pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.023865 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d2nc\" (UniqueName: \"kubernetes.io/projected/45f02625-70e9-48ec-8dd4-a0bd456a283b-kube-api-access-4d2nc\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.028980 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8c715e82-54b4-49f8-8bc2-33391c59a801\") pod \"mariadb-copy-data\" (UID: \"45f02625-70e9-48ec-8dd4-a0bd456a283b\") " pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.119579 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 21 16:02:22 crc kubenswrapper[4902]: I0121 16:02:22.609998 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 16:02:23 crc kubenswrapper[4902]: I0121 16:02:23.315778 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"45f02625-70e9-48ec-8dd4-a0bd456a283b","Type":"ContainerStarted","Data":"2f267b1e9fd95ab1d681b1fc71dd13c99d7242664f4622ef5a35f6cfcd7f0f68"} Jan 21 16:02:23 crc kubenswrapper[4902]: I0121 16:02:23.316148 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"45f02625-70e9-48ec-8dd4-a0bd456a283b","Type":"ContainerStarted","Data":"f03372433b6fe52b931300c0e8c5e006363900d659194ca0c879164110259772"} Jan 21 16:02:23 crc kubenswrapper[4902]: I0121 16:02:23.331834 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.331756924 podStartE2EDuration="3.331756924s" podCreationTimestamp="2026-01-21 16:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:23.330318704 +0000 UTC m=+5305.407151773" watchObservedRunningTime="2026-01-21 16:02:23.331756924 +0000 UTC m=+5305.408590013" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.140728 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.142556 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.148115 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.271196 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") pod \"mariadb-client\" (UID: \"770cc96e-3108-4294-aa08-d84995b87c15\") " pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.372352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") pod \"mariadb-client\" (UID: \"770cc96e-3108-4294-aa08-d84995b87c15\") " pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.397368 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") pod \"mariadb-client\" (UID: \"770cc96e-3108-4294-aa08-d84995b87c15\") " pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.466337 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:26 crc kubenswrapper[4902]: I0121 16:02:26.948298 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:26 crc kubenswrapper[4902]: W0121 16:02:26.953346 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod770cc96e_3108_4294_aa08_d84995b87c15.slice/crio-b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df WatchSource:0}: Error finding container b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df: Status 404 returned error can't find the container with id b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df Jan 21 16:02:27 crc kubenswrapper[4902]: I0121 16:02:27.359875 4902 generic.go:334] "Generic (PLEG): container finished" podID="770cc96e-3108-4294-aa08-d84995b87c15" containerID="0b56fe28c730faebb9b858e50e97ecef1625af2c756c8684ae0d499694f95667" exitCode=0 Jan 21 16:02:27 crc kubenswrapper[4902]: I0121 16:02:27.359917 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"770cc96e-3108-4294-aa08-d84995b87c15","Type":"ContainerDied","Data":"0b56fe28c730faebb9b858e50e97ecef1625af2c756c8684ae0d499694f95667"} Jan 21 16:02:27 crc kubenswrapper[4902]: I0121 16:02:27.359942 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"770cc96e-3108-4294-aa08-d84995b87c15","Type":"ContainerStarted","Data":"b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df"} Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.690498 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.720171 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_770cc96e-3108-4294-aa08-d84995b87c15/mariadb-client/0.log" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.746991 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.752817 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.847103 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") pod \"770cc96e-3108-4294-aa08-d84995b87c15\" (UID: \"770cc96e-3108-4294-aa08-d84995b87c15\") " Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.852346 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg" (OuterVolumeSpecName: "kube-api-access-kbptg") pod "770cc96e-3108-4294-aa08-d84995b87c15" (UID: "770cc96e-3108-4294-aa08-d84995b87c15"). InnerVolumeSpecName "kube-api-access-kbptg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.879777 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:28 crc kubenswrapper[4902]: E0121 16:02:28.880183 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770cc96e-3108-4294-aa08-d84995b87c15" containerName="mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.880196 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="770cc96e-3108-4294-aa08-d84995b87c15" containerName="mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.880343 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="770cc96e-3108-4294-aa08-d84995b87c15" containerName="mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.880880 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.896872 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:28 crc kubenswrapper[4902]: I0121 16:02:28.948661 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbptg\" (UniqueName: \"kubernetes.io/projected/770cc96e-3108-4294-aa08-d84995b87c15-kube-api-access-kbptg\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.049706 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") pod \"mariadb-client\" (UID: \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\") " pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.151673 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") pod \"mariadb-client\" (UID: \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\") " pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.169307 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") pod \"mariadb-client\" (UID: \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\") " pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.212448 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.379687 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b94e795b4c024b501fc55783ab36f8a15c3cbd039e88411bcd7c1ca7d2f943df" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.380031 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.404829 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="770cc96e-3108-4294-aa08-d84995b87c15" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" Jan 21 16:02:29 crc kubenswrapper[4902]: I0121 16:02:29.635074 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:30 crc kubenswrapper[4902]: I0121 16:02:30.304366 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770cc96e-3108-4294-aa08-d84995b87c15" path="/var/lib/kubelet/pods/770cc96e-3108-4294-aa08-d84995b87c15/volumes" Jan 21 16:02:30 crc kubenswrapper[4902]: I0121 16:02:30.390275 4902 generic.go:334] "Generic (PLEG): container finished" podID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" containerID="7dcfa3ca5d15f7808f5af95de45f6bb83034e3c73c913f10478deaba94fa2fdd" exitCode=0 Jan 21 16:02:30 crc kubenswrapper[4902]: I0121 16:02:30.390320 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"7abbaa6f-fb64-458a-bf9c-1fd63370b978","Type":"ContainerDied","Data":"7dcfa3ca5d15f7808f5af95de45f6bb83034e3c73c913f10478deaba94fa2fdd"} Jan 21 16:02:30 crc kubenswrapper[4902]: I0121 16:02:30.390362 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"7abbaa6f-fb64-458a-bf9c-1fd63370b978","Type":"ContainerStarted","Data":"bd37f84ca51450b73121db67e591ddfc7fa9317a18639717d3561bf6355411ed"} Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.695271 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.718536 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_7abbaa6f-fb64-458a-bf9c-1fd63370b978/mariadb-client/0.log" Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.744575 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.750926 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.793096 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") pod \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\" (UID: \"7abbaa6f-fb64-458a-bf9c-1fd63370b978\") " Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.801377 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2" (OuterVolumeSpecName: "kube-api-access-blmx2") pod "7abbaa6f-fb64-458a-bf9c-1fd63370b978" (UID: "7abbaa6f-fb64-458a-bf9c-1fd63370b978"). InnerVolumeSpecName "kube-api-access-blmx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:02:31 crc kubenswrapper[4902]: I0121 16:02:31.894727 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blmx2\" (UniqueName: \"kubernetes.io/projected/7abbaa6f-fb64-458a-bf9c-1fd63370b978-kube-api-access-blmx2\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:32 crc kubenswrapper[4902]: I0121 16:02:32.306633 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" path="/var/lib/kubelet/pods/7abbaa6f-fb64-458a-bf9c-1fd63370b978/volumes" Jan 21 16:02:32 crc kubenswrapper[4902]: I0121 16:02:32.406088 4902 scope.go:117] "RemoveContainer" containerID="7dcfa3ca5d15f7808f5af95de45f6bb83034e3c73c913f10478deaba94fa2fdd" Jan 21 16:02:32 crc kubenswrapper[4902]: I0121 16:02:32.406112 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 16:02:33 crc kubenswrapper[4902]: I0121 16:02:33.295396 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:02:33 crc kubenswrapper[4902]: E0121 16:02:33.295747 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:02:45 crc kubenswrapper[4902]: I0121 16:02:45.295227 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:02:45 crc kubenswrapper[4902]: E0121 16:02:45.295969 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:02:58 crc kubenswrapper[4902]: I0121 16:02:58.302289 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:02:58 crc kubenswrapper[4902]: E0121 16:02:58.302982 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.136843 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:03 crc kubenswrapper[4902]: E0121 16:03:03.137930 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" containerName="mariadb-client" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.137959 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" containerName="mariadb-client" Jan 21 16:03:03 crc 
kubenswrapper[4902]: I0121 16:03:03.138490 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7abbaa6f-fb64-458a-bf9c-1fd63370b978" containerName="mariadb-client" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.140621 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.146801 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.334255 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.334348 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.334746 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.436184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.436248 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.436273 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.436849 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.437541 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.456078 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") pod \"redhat-marketplace-hdb6q\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.474444 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:03 crc kubenswrapper[4902]: I0121 16:03:03.903560 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:04 crc kubenswrapper[4902]: I0121 16:03:04.681364 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerID="fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d" exitCode=0 Jan 21 16:03:04 crc kubenswrapper[4902]: I0121 16:03:04.681717 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerDied","Data":"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d"} Jan 21 16:03:04 crc kubenswrapper[4902]: I0121 16:03:04.681756 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerStarted","Data":"be02b642998606b42c51b3f9e618047463c4734c56781cfc76114a147ceeff59"} Jan 21 16:03:05 crc kubenswrapper[4902]: I0121 16:03:05.691204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerStarted","Data":"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff"} Jan 21 16:03:06 crc kubenswrapper[4902]: I0121 16:03:06.701418 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerID="79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff" exitCode=0 Jan 21 16:03:06 crc kubenswrapper[4902]: I0121 16:03:06.701471 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerDied","Data":"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff"} Jan 21 16:03:07 crc kubenswrapper[4902]: I0121 16:03:07.712957 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerStarted","Data":"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450"} Jan 21 16:03:07 crc kubenswrapper[4902]: I0121 16:03:07.742860 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hdb6q" podStartSLOduration=2.169493494 podStartE2EDuration="4.742836033s" podCreationTimestamp="2026-01-21 16:03:03 +0000 UTC" firstStartedPulling="2026-01-21 16:03:04.684188661 +0000 UTC m=+5346.761021730" lastFinishedPulling="2026-01-21 16:03:07.25753121 +0000 UTC 
m=+5349.334364269" observedRunningTime="2026-01-21 16:03:07.740793036 +0000 UTC m=+5349.817626085" watchObservedRunningTime="2026-01-21 16:03:07.742836033 +0000 UTC m=+5349.819669062" Jan 21 16:03:12 crc kubenswrapper[4902]: I0121 16:03:12.294527 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:03:12 crc kubenswrapper[4902]: E0121 16:03:12.295325 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.475376 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.475455 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.533840 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.824962 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:13 crc kubenswrapper[4902]: I0121 16:03:13.888697 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:15 crc kubenswrapper[4902]: I0121 16:03:15.770323 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hdb6q" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="registry-server" containerID="cri-o://83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" gracePeriod=2 Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.205847 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.239986 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") pod \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.240028 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") pod \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.240089 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") pod \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\" (UID: \"1a0bbdbe-ee51-4b19-be3e-446c55d329ce\") " Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.242714 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities" (OuterVolumeSpecName: "utilities") pod "1a0bbdbe-ee51-4b19-be3e-446c55d329ce" (UID: "1a0bbdbe-ee51-4b19-be3e-446c55d329ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.262342 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc" (OuterVolumeSpecName: "kube-api-access-lkspc") pod "1a0bbdbe-ee51-4b19-be3e-446c55d329ce" (UID: "1a0bbdbe-ee51-4b19-be3e-446c55d329ce"). InnerVolumeSpecName "kube-api-access-lkspc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.297229 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a0bbdbe-ee51-4b19-be3e-446c55d329ce" (UID: "1a0bbdbe-ee51-4b19-be3e-446c55d329ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.341769 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkspc\" (UniqueName: \"kubernetes.io/projected/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-kube-api-access-lkspc\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.341809 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.341820 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0bbdbe-ee51-4b19-be3e-446c55d329ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778872 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerID="83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" exitCode=0 Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778920 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerDied","Data":"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450"} Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778946 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdb6q" event={"ID":"1a0bbdbe-ee51-4b19-be3e-446c55d329ce","Type":"ContainerDied","Data":"be02b642998606b42c51b3f9e618047463c4734c56781cfc76114a147ceeff59"} Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778955 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdb6q" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.778964 4902 scope.go:117] "RemoveContainer" containerID="83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.799337 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.805960 4902 scope.go:117] "RemoveContainer" containerID="79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.808303 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdb6q"] Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.824066 4902 scope.go:117] "RemoveContainer" containerID="fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.856581 4902 scope.go:117] "RemoveContainer" containerID="83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" Jan 21 16:03:16 crc kubenswrapper[4902]: E0121 16:03:16.857075 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450\": container with ID starting with 83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450 not found: ID does not exist" containerID="83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857106 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450"} err="failed to get container status \"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450\": rpc error: code = NotFound desc = could not find container \"83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450\": container with ID starting with 83b12c6a4998e169b12750d5ebb1753cf19bc3eab4cf8508c40ab4003384e450 not found: ID does not exist" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857127 4902 scope.go:117] "RemoveContainer" containerID="79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff" Jan 21 16:03:16 crc kubenswrapper[4902]: E0121 16:03:16.857496 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff\": container with ID starting with 79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff not found: ID does not exist" containerID="79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857520 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff"} err="failed to get container status \"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff\": rpc error: code = NotFound desc = could not find container \"79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff\": container with ID starting with 79c287022aab0bb7b1b2cb0a387d299d2e6a65dc3b8f58bb2a04cbb81249a9ff not found: ID does not exist" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857556 4902 scope.go:117] "RemoveContainer" 
containerID="fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d" Jan 21 16:03:16 crc kubenswrapper[4902]: E0121 16:03:16.857816 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d\": container with ID starting with fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d not found: ID does not exist" containerID="fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d" Jan 21 16:03:16 crc kubenswrapper[4902]: I0121 16:03:16.857839 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d"} err="failed to get container status \"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d\": rpc error: code = NotFound desc = could not find container \"fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d\": container with ID starting with fc65425f6acdcd8ad1064e631ef6b438360961113ec78db9f0ab29fe0a7d077d not found: ID does not exist" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.123374 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:03:17 crc kubenswrapper[4902]: E0121 16:03:17.123822 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="extract-utilities" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.123851 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="extract-utilities" Jan 21 16:03:17 crc kubenswrapper[4902]: E0121 16:03:17.123882 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="registry-server" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.123894 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="registry-server" Jan 21 16:03:17 crc kubenswrapper[4902]: E0121 16:03:17.123912 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="extract-content" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.123923 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="extract-content" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.124183 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" containerName="registry-server" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.125485 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.133962 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.134415 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.134589 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.134613 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.134701 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-lddmf" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.135123 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.135263 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.146393 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.148236 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.155862 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.169180 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.198057 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gj2x\" (UniqueName: \"kubernetes.io/projected/69d6d956-f400-4339-8b68-c2644bb9b9eb-kube-api-access-5gj2x\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255264 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255403 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255486 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " 
pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255552 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255731 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.255826 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.256671 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257038 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257197 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257415 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdqfx\" (UniqueName: \"kubernetes.io/projected/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-kube-api-access-fdqfx\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-config\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257743 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-config\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc 
kubenswrapper[4902]: I0121 16:03:17.257829 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257876 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-config\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.257912 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258113 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b7b2b112-c375-4213-a11d-eed088866ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7b2b112-c375-4213-a11d-eed088866ef0\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258168 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258220 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54thc\" (UniqueName: \"kubernetes.io/projected/52b530ea-b7ee-4420-a3d6-d140ac75c474-kube-api-access-54thc\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258250 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258373 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " 
pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258420 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.258457 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.359960 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360264 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360341 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360413 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gj2x\" (UniqueName: \"kubernetes.io/projected/69d6d956-f400-4339-8b68-c2644bb9b9eb-kube-api-access-5gj2x\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360474 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360615 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360750 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360844 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.360924 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361036 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361153 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361273 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdqfx\" (UniqueName: \"kubernetes.io/projected/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-kube-api-access-fdqfx\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361390 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-config\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361483 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361628 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-config\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 
16:03:17.361761 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361879 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-config\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361934 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362031 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362146 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362289 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b7b2b112-c375-4213-a11d-eed088866ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7b2b112-c375-4213-a11d-eed088866ef0\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362381 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362479 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54thc\" (UniqueName: \"kubernetes.io/projected/52b530ea-b7ee-4420-a3d6-d140ac75c474-kube-api-access-54thc\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.362869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-config\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.363211 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.363505 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-config\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.361775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.364720 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52b530ea-b7ee-4420-a3d6-d140ac75c474-config\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.364985 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.365002 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.366124 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52b530ea-b7ee-4420-a3d6-d140ac75c474-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.366299 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.366878 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability 
not set. Skipping MountDevice... Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.366918 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6376770102d5a2c9cc14d8ba869f07cb47b601e45a29b1ab6d31477a59155ada/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367020 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367026 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367125 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b7b2b112-c375-4213-a11d-eed088866ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7b2b112-c375-4213-a11d-eed088866ef0\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fddab078abbcdd19a8bb025b73441ff56000f64db216868e3fba63259e5ac188/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367719 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.367752 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9dd2dd440dd20c9c629c0b341fc66910819ff310cbb68fa497bc94336f1aa38e/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.374084 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.375200 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.381371 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69d6d956-f400-4339-8b68-c2644bb9b9eb-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.385560 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69d6d956-f400-4339-8b68-c2644bb9b9eb-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.389424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54thc\" (UniqueName: \"kubernetes.io/projected/52b530ea-b7ee-4420-a3d6-d140ac75c474-kube-api-access-54thc\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.389762 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdqfx\" (UniqueName: \"kubernetes.io/projected/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-kube-api-access-fdqfx\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.391361 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf74aba-3fc7-42ea-9537-a176dbf2a2e2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.394572 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gj2x\" (UniqueName: \"kubernetes.io/projected/69d6d956-f400-4339-8b68-c2644bb9b9eb-kube-api-access-5gj2x\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc 
kubenswrapper[4902]: I0121 16:03:17.447027 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97432595-5de6-4cac-a1f3-f654f9ca8b9c\") pod \"ovsdbserver-nb-0\" (UID: \"52b530ea-b7ee-4420-a3d6-d140ac75c474\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.457355 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-560e1c15-7dc7-46b7-928f-ec627b7d70dc\") pod \"ovsdbserver-nb-1\" (UID: \"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2\") " pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.460770 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b7b2b112-c375-4213-a11d-eed088866ef0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7b2b112-c375-4213-a11d-eed088866ef0\") pod \"ovsdbserver-nb-2\" (UID: \"69d6d956-f400-4339-8b68-c2644bb9b9eb\") " pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.467627 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.742929 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:17 crc kubenswrapper[4902]: I0121 16:03:17.752833 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.019557 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.178695 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.307693 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0bbdbe-ee51-4b19-be3e-446c55d329ce" path="/var/lib/kubelet/pods/1a0bbdbe-ee51-4b19-be3e-446c55d329ce/volumes" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.564541 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.668873 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.670263 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.678653 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.679088 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.679975 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qvwb5" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.680116 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.693758 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.700697 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.702129 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.710913 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.712558 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.741737 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.749971 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799284 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799364 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799410 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799452 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-config\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799505 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799545 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8scp\" (UniqueName: \"kubernetes.io/projected/fa609e80-09d5-4393-a79f-9989f9223bdd-kube-api-access-m8scp\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799585 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8778a376-340b-45e3-b39d-cce41c466f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8778a376-340b-45e3-b39d-cce41c466f3b\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.799629 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.802904 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"69d6d956-f400-4339-8b68-c2644bb9b9eb","Type":"ContainerStarted","Data":"713ed28339ce0c81664640268861d087ecbd17dc1c3484e78c87508b0a91ff6a"} Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.802953 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"69d6d956-f400-4339-8b68-c2644bb9b9eb","Type":"ContainerStarted","Data":"221cd96926971a5fc0702e0876716e32692bc5a1e0c503e68787f580e8acdd64"} Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.804508 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2","Type":"ContainerStarted","Data":"0b77c9c9e0e00ad5e813e70ecba66df8b2621c5aa975b59b640426d494976851"} Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.807492 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"52b530ea-b7ee-4420-a3d6-d140ac75c474","Type":"ContainerStarted","Data":"b59eecebe42c6fff02b6e60d8280e8d78a547b8431fe1ee315d35b50d66ecb9b"} Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.900986 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8scp\" (UniqueName: \"kubernetes.io/projected/fa609e80-09d5-4393-a79f-9989f9223bdd-kube-api-access-m8scp\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901087 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901124 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-config\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8778a376-340b-45e3-b39d-cce41c466f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8778a376-340b-45e3-b39d-cce41c466f3b\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901187 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901213 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901305 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901327 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901373 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4mf\" (UniqueName: \"kubernetes.io/projected/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-kube-api-access-4j4mf\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901399 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901423 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901458 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901482 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-config\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901522 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901557 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fd108b96-2c94-42eb-9abe-885541aaa945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd108b96-2c94-42eb-9abe-885541aaa945\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901588 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901629 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901667 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-config\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901706 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sdls\" (UniqueName: \"kubernetes.io/projected/51aa3a3a-61f9-4757-b302-aa170904d97f-kube-api-access-2sdls\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901738 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72c95061-8905-4c50-892f-e1dd13b791b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72c95061-8905-4c50-892f-e1dd13b791b8\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901765 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901802 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901847 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.901869 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.904648 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.904798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.907780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.908239 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa609e80-09d5-4393-a79f-9989f9223bdd-config\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.909037 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.909162 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8778a376-340b-45e3-b39d-cce41c466f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8778a376-340b-45e3-b39d-cce41c466f3b\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/16996701e509ddf4c9edcb9d835961256510b9b47ca58d41158d6daa37486c0d/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.923540 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.923847 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa609e80-09d5-4393-a79f-9989f9223bdd-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.924765 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8scp\" (UniqueName: \"kubernetes.io/projected/fa609e80-09d5-4393-a79f-9989f9223bdd-kube-api-access-m8scp\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.962890 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8778a376-340b-45e3-b39d-cce41c466f3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8778a376-340b-45e3-b39d-cce41c466f3b\") pod \"ovsdbserver-sb-0\" (UID: \"fa609e80-09d5-4393-a79f-9989f9223bdd\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:18 crc kubenswrapper[4902]: I0121 16:03:18.998478 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003576 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003607 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003627 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-config\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003646 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003663 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003689 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003708 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j4mf\" (UniqueName: \"kubernetes.io/projected/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-kube-api-access-4j4mf\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003724 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003762 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-config\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003801 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fd108b96-2c94-42eb-9abe-885541aaa945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd108b96-2c94-42eb-9abe-885541aaa945\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003899 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sdls\" (UniqueName: \"kubernetes.io/projected/51aa3a3a-61f9-4757-b302-aa170904d97f-kube-api-access-2sdls\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003919 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-72c95061-8905-4c50-892f-e1dd13b791b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72c95061-8905-4c50-892f-e1dd13b791b8\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.003937 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.004379 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.007176 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.008025 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 
16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.008829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.009764 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.009771 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-config\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.011970 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51aa3a3a-61f9-4757-b302-aa170904d97f-config\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012223 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012255 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fd108b96-2c94-42eb-9abe-885541aaa945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd108b96-2c94-42eb-9abe-885541aaa945\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd25d4049fb12958d19ade43ecb8c4f5b4b71e9f8765ef9f5523c9eb7e9acecf/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012428 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012460 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-72c95061-8905-4c50-892f-e1dd13b791b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72c95061-8905-4c50-892f-e1dd13b791b8\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0fec25508ac70e473cfdac92e070550d89bcff2b29e75e1519b09d4ff6b8c411/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.012957 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.013512 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.020856 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51aa3a3a-61f9-4757-b302-aa170904d97f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.024856 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.027766 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.033062 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sdls\" (UniqueName: \"kubernetes.io/projected/51aa3a3a-61f9-4757-b302-aa170904d97f-kube-api-access-2sdls\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.034705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j4mf\" (UniqueName: \"kubernetes.io/projected/aadc3978-ec1c-4d8d-8d02-f199d6509d5c-kube-api-access-4j4mf\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.063101 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fd108b96-2c94-42eb-9abe-885541aaa945\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fd108b96-2c94-42eb-9abe-885541aaa945\") pod \"ovsdbserver-sb-2\" (UID: \"aadc3978-ec1c-4d8d-8d02-f199d6509d5c\") " 
pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.112181 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-72c95061-8905-4c50-892f-e1dd13b791b8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72c95061-8905-4c50-892f-e1dd13b791b8\") pod \"ovsdbserver-sb-1\" (UID: \"51aa3a3a-61f9-4757-b302-aa170904d97f\") " pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.322898 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.339537 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.682545 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.785096 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 16:03:19 crc kubenswrapper[4902]: W0121 16:03:19.788852 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51aa3a3a_61f9_4757_b302_aa170904d97f.slice/crio-4d3a3d57b01b7fd6191f348740c7d38ae92c7b48e105628aec4c7d9f51b0db42 WatchSource:0}: Error finding container 4d3a3d57b01b7fd6191f348740c7d38ae92c7b48e105628aec4c7d9f51b0db42: Status 404 returned error can't find the container with id 4d3a3d57b01b7fd6191f348740c7d38ae92c7b48e105628aec4c7d9f51b0db42 Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.819738 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"69d6d956-f400-4339-8b68-c2644bb9b9eb","Type":"ContainerStarted","Data":"0bb79ecdee28f880e552556bd4939429494e9bfb063f343efcfff77bd792cec7"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.821975 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fa609e80-09d5-4393-a79f-9989f9223bdd","Type":"ContainerStarted","Data":"da042cb7c84cc2b2b529422e8ce70b299f659710f1acddb459383fabee51e5b2"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.824546 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"51aa3a3a-61f9-4757-b302-aa170904d97f","Type":"ContainerStarted","Data":"4d3a3d57b01b7fd6191f348740c7d38ae92c7b48e105628aec4c7d9f51b0db42"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.826391 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2","Type":"ContainerStarted","Data":"6bd048219e1683d80362f11d25faacd5fbaea2be7eb1c853f5765fc8ed6e2eaf"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.826426 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"fcf74aba-3fc7-42ea-9537-a176dbf2a2e2","Type":"ContainerStarted","Data":"beb542aba278782fbe8b9bf6055438f73de81761200158808ae24634ea7e7086"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.828692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"52b530ea-b7ee-4420-a3d6-d140ac75c474","Type":"ContainerStarted","Data":"9f67e58fbff69968a740146c35b211053897854c82bac23fba02de5d144e6d83"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.828724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"52b530ea-b7ee-4420-a3d6-d140ac75c474","Type":"ContainerStarted","Data":"78bd186e6934af7ccfa619d99513cb0cc5fc08849adf798d2ae9ea056cc9c7e0"} Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.841408 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.8413909520000002 podStartE2EDuration="3.841390952s" podCreationTimestamp="2026-01-21 16:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:19.837639357 +0000 UTC m=+5361.914472386" watchObservedRunningTime="2026-01-21 16:03:19.841390952 +0000 UTC m=+5361.918223981" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.869091 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.869022096 podStartE2EDuration="3.869022096s" podCreationTimestamp="2026-01-21 16:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:19.866338301 +0000 UTC m=+5361.943171340" watchObservedRunningTime="2026-01-21 16:03:19.869022096 +0000 UTC m=+5361.945855125" Jan 21 16:03:19 crc kubenswrapper[4902]: I0121 16:03:19.899343 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.899320435 podStartE2EDuration="3.899320435s" podCreationTimestamp="2026-01-21 16:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:19.89093651 +0000 UTC m=+5361.967769539" watchObservedRunningTime="2026-01-21 16:03:19.899320435 +0000 UTC m=+5361.976153464" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.043393 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 16:03:20 crc kubenswrapper[4902]: W0121 16:03:20.055472 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaadc3978_ec1c_4d8d_8d02_f199d6509d5c.slice/crio-045bbc59816fd15959ce4cc70990898c0f1a7e33935d1c77333184402dd6365e WatchSource:0}: Error finding container 045bbc59816fd15959ce4cc70990898c0f1a7e33935d1c77333184402dd6365e: Status 404 returned error can't find the container with id 045bbc59816fd15959ce4cc70990898c0f1a7e33935d1c77333184402dd6365e Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.469285 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.743213 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.753440 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.848092 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"aadc3978-ec1c-4d8d-8d02-f199d6509d5c","Type":"ContainerStarted","Data":"10687baf2ccc1b8ec1e74cffaedab771b98c26f77c65e3480a13405b39359195"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.848136 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" 
event={"ID":"aadc3978-ec1c-4d8d-8d02-f199d6509d5c","Type":"ContainerStarted","Data":"13fa5dffe47aa2b6494ef06624514684be683d0bccb540b1717faf78034c328f"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.848147 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"aadc3978-ec1c-4d8d-8d02-f199d6509d5c","Type":"ContainerStarted","Data":"045bbc59816fd15959ce4cc70990898c0f1a7e33935d1c77333184402dd6365e"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.850937 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fa609e80-09d5-4393-a79f-9989f9223bdd","Type":"ContainerStarted","Data":"4945eee6f6827bbe6ae155b3c5ba7fe09ba1f585ab8f384f06172e48c8a2b5fb"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.850985 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fa609e80-09d5-4393-a79f-9989f9223bdd","Type":"ContainerStarted","Data":"9a2c03eff2ca62968b807f2db3f48c3fa66978afc8251433c37ed434bb6584b8"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.853714 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"51aa3a3a-61f9-4757-b302-aa170904d97f","Type":"ContainerStarted","Data":"86d76922527bd27c8e68b8440afffd36010d98c79654cae923ade659da2b07e5"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.853865 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"51aa3a3a-61f9-4757-b302-aa170904d97f","Type":"ContainerStarted","Data":"45b8282f46f807884a3909c1753ccd4087463b3ae218c1364b729c1584ce0f88"} Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.867245 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.867221186 podStartE2EDuration="3.867221186s" podCreationTimestamp="2026-01-21 16:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:20.866098004 +0000 UTC m=+5362.942931043" watchObservedRunningTime="2026-01-21 16:03:20.867221186 +0000 UTC m=+5362.944054215" Jan 21 16:03:20 crc kubenswrapper[4902]: I0121 16:03:20.893610 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.893579034 podStartE2EDuration="3.893579034s" podCreationTimestamp="2026-01-21 16:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:20.891140306 +0000 UTC m=+5362.967973335" watchObservedRunningTime="2026-01-21 16:03:20.893579034 +0000 UTC m=+5362.970412063" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:21.999733 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.323182 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.340469 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.468769 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.743129 4902 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:22 crc kubenswrapper[4902]: I0121 16:03:22.753282 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.512790 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.538372 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=6.538344884 podStartE2EDuration="6.538344884s" podCreationTimestamp="2026-01-21 16:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:20.911446025 +0000 UTC m=+5362.988279054" watchObservedRunningTime="2026-01-21 16:03:23.538344884 +0000 UTC m=+5365.615177913" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.789723 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.803779 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.914476 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.922759 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.924867 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 21 16:03:23 crc kubenswrapper[4902]: I0121 16:03:23.999468 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.100897 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.102211 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.105757 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.125264 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.207073 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.207156 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.207240 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.207280 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr2fq\" (UniqueName: \"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.212054 4902 scope.go:117] "RemoveContainer" containerID="22e6c4bda8a7a16db8551cc07c7e5779cb515519925f610dd7708c60c5c8a6fc" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.308802 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.308847 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.308906 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.308932 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr2fq\" (UniqueName: 
\"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.309841 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.309952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.310056 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.323385 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.332249 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr2fq\" (UniqueName: \"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") pod \"dnsmasq-dns-d944468c-9qwvt\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.340542 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.422791 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.868063 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:24 crc kubenswrapper[4902]: I0121 16:03:24.893887 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerStarted","Data":"d2474e93bef33df41251ab8aed435fd5eabbbc74ae11fd5542967e63203c6e50"} Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.046968 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.092619 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.354701 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.378369 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.382653 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.384440 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.386396 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.403324 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.431405 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.433961 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.434002 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.434154 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.434281 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") pod 
\"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.434342 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.437421 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.484605 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535335 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535415 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535451 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535469 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.535561 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.536592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.537220 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 
21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.537987 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.538201 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.557030 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") pod \"dnsmasq-dns-57f688859c-fb82z\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.721524 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.913239 4902 generic.go:334] "Generic (PLEG): container finished" podID="eb464c35-1456-495f-bbc0-3d23c076af70" containerID="ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1" exitCode=0 Jan 21 16:03:25 crc kubenswrapper[4902]: I0121 16:03:25.913297 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerDied","Data":"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1"} Jan 21 16:03:26 crc kubenswrapper[4902]: W0121 16:03:26.143165 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfebc8b_bc5c_4214_acee_021a404994bf.slice/crio-f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0 WatchSource:0}: Error finding container f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0: Status 404 returned error can't find the container with id f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0 Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.144804 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:03:26 crc kubenswrapper[4902]: E0121 16:03:26.452461 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfebc8b_bc5c_4214_acee_021a404994bf.slice/crio-311b61cd815d9e9e4c95e8d3428eb904438d2e7a6efb54993e589d294d8780c4.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.922543 4902 generic.go:334] "Generic (PLEG): container finished" podID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerID="311b61cd815d9e9e4c95e8d3428eb904438d2e7a6efb54993e589d294d8780c4" exitCode=0 Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.922609 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" 
event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerDied","Data":"311b61cd815d9e9e4c95e8d3428eb904438d2e7a6efb54993e589d294d8780c4"} Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.922638 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerStarted","Data":"f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0"} Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.925361 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerStarted","Data":"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8"} Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.925516 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d944468c-9qwvt" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="dnsmasq-dns" containerID="cri-o://59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" gracePeriod=10 Jan 21 16:03:26 crc kubenswrapper[4902]: I0121 16:03:26.925603 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.010771 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d944468c-9qwvt" podStartSLOduration=3.010752995 podStartE2EDuration="3.010752995s" podCreationTimestamp="2026-01-21 16:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:27.008852362 +0000 UTC m=+5369.085685381" watchObservedRunningTime="2026-01-21 16:03:27.010752995 +0000 UTC m=+5369.087586024" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.294882 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.295116 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.549599 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.676417 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") pod \"eb464c35-1456-495f-bbc0-3d23c076af70\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.677517 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr2fq\" (UniqueName: \"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") pod \"eb464c35-1456-495f-bbc0-3d23c076af70\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.677631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") pod \"eb464c35-1456-495f-bbc0-3d23c076af70\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.677771 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") pod \"eb464c35-1456-495f-bbc0-3d23c076af70\" (UID: \"eb464c35-1456-495f-bbc0-3d23c076af70\") " Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.681717 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq" (OuterVolumeSpecName: "kube-api-access-lr2fq") pod "eb464c35-1456-495f-bbc0-3d23c076af70" (UID: "eb464c35-1456-495f-bbc0-3d23c076af70"). InnerVolumeSpecName "kube-api-access-lr2fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.722576 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb464c35-1456-495f-bbc0-3d23c076af70" (UID: "eb464c35-1456-495f-bbc0-3d23c076af70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.724515 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config" (OuterVolumeSpecName: "config") pod "eb464c35-1456-495f-bbc0-3d23c076af70" (UID: "eb464c35-1456-495f-bbc0-3d23c076af70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.726325 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb464c35-1456-495f-bbc0-3d23c076af70" (UID: "eb464c35-1456-495f-bbc0-3d23c076af70"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.780119 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.780339 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr2fq\" (UniqueName: \"kubernetes.io/projected/eb464c35-1456-495f-bbc0-3d23c076af70-kube-api-access-lr2fq\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.780414 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.780505 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb464c35-1456-495f-bbc0-3d23c076af70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.837156 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.837503 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="init" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.837515 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="init" Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.837527 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="dnsmasq-dns" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.837532 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="dnsmasq-dns" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.837691 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" containerName="dnsmasq-dns" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.838397 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.841441 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.850558 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.934807 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerStarted","Data":"401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13"} Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.934981 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938116 4902 generic.go:334] "Generic (PLEG): container finished" podID="eb464c35-1456-495f-bbc0-3d23c076af70" containerID="59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" exitCode=0 Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938145 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerDied","Data":"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8"} Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938162 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d944468c-9qwvt" event={"ID":"eb464c35-1456-495f-bbc0-3d23c076af70","Type":"ContainerDied","Data":"d2474e93bef33df41251ab8aed435fd5eabbbc74ae11fd5542967e63203c6e50"} Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938179 4902 scope.go:117] "RemoveContainer" containerID="59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.938325 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d944468c-9qwvt" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.963750 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57f688859c-fb82z" podStartSLOduration=2.963730228 podStartE2EDuration="2.963730228s" podCreationTimestamp="2026-01-21 16:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:27.953610625 +0000 UTC m=+5370.030443654" watchObservedRunningTime="2026-01-21 16:03:27.963730228 +0000 UTC m=+5370.040563257" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.965350 4902 scope.go:117] "RemoveContainer" containerID="ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.975546 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.982997 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d944468c-9qwvt"] Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.983446 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/15260f61-f63b-48cf-8c1d-1269ed5264d6-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.983559 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p997k\" (UniqueName: \"kubernetes.io/projected/15260f61-f63b-48cf-8c1d-1269ed5264d6-kube-api-access-p997k\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.983599 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.986120 4902 scope.go:117] "RemoveContainer" containerID="59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.986506 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8\": container with ID starting with 59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8 not found: ID does not exist" containerID="59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.986564 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8"} err="failed to get container status \"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8\": rpc error: code = NotFound desc = could not find container \"59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8\": container with ID starting with 59eb301647ef857e90ea9a6784562c5020642942feccfd960ecc328c0498a8c8 not found: ID does 
not exist" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.986598 4902 scope.go:117] "RemoveContainer" containerID="ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1" Jan 21 16:03:27 crc kubenswrapper[4902]: E0121 16:03:27.987093 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1\": container with ID starting with ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1 not found: ID does not exist" containerID="ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1" Jan 21 16:03:27 crc kubenswrapper[4902]: I0121 16:03:27.987125 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1"} err="failed to get container status \"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1\": rpc error: code = NotFound desc = could not find container \"ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1\": container with ID starting with ee8cc873d94ba2c61bfc3ae5a338e0bc1cd25a2108c8845b199b067cbaaa4db1 not found: ID does not exist" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.084718 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/15260f61-f63b-48cf-8c1d-1269ed5264d6-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.085087 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p997k\" (UniqueName: \"kubernetes.io/projected/15260f61-f63b-48cf-8c1d-1269ed5264d6-kube-api-access-p997k\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.085203 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.087633 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.087676 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d0337883da665e7f9f3b16b7d379ef59044766ce24adb35f8e12bf624dbdf08/globalmount\"" pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.088928 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/15260f61-f63b-48cf-8c1d-1269ed5264d6-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.100848 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p997k\" (UniqueName: \"kubernetes.io/projected/15260f61-f63b-48cf-8c1d-1269ed5264d6-kube-api-access-p997k\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.112576 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-292168a9-bbdb-49af-9ca7-c15d08ebd2ba\") pod \"ovn-copy-data\" (UID: \"15260f61-f63b-48cf-8c1d-1269ed5264d6\") " pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.164701 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.308775 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb464c35-1456-495f-bbc0-3d23c076af70" path="/var/lib/kubelet/pods/eb464c35-1456-495f-bbc0-3d23c076af70/volumes" Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.673636 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.683675 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:03:28 crc kubenswrapper[4902]: I0121 16:03:28.948429 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"15260f61-f63b-48cf-8c1d-1269ed5264d6","Type":"ContainerStarted","Data":"4aa99197a4359638f0a077c3b965835f3f1ebab8569a45c63b721fb146eec322"} Jan 21 16:03:29 crc kubenswrapper[4902]: I0121 16:03:29.962926 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"15260f61-f63b-48cf-8c1d-1269ed5264d6","Type":"ContainerStarted","Data":"ff53c1d391c5b288428658245a6576a3b8ff756bb1960088cd4ffdc889080fc5"} Jan 21 16:03:29 crc kubenswrapper[4902]: I0121 16:03:29.986887 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.249788301 podStartE2EDuration="3.986824324s" podCreationTimestamp="2026-01-21 16:03:26 +0000 UTC" firstStartedPulling="2026-01-21 16:03:28.68316858 +0000 UTC m=+5370.760001609" lastFinishedPulling="2026-01-21 16:03:29.420204553 +0000 UTC m=+5371.497037632" observedRunningTime="2026-01-21 16:03:29.977645797 +0000 UTC m=+5372.054478846" watchObservedRunningTime="2026-01-21 16:03:29.986824324 +0000 UTC m=+5372.063657363" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.243375 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.245552 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.255859 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.256665 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.257872 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9hts2" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.259986 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.285348 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306295 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-scripts\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306532 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306620 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-config\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306687 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9rh\" (UniqueName: \"kubernetes.io/projected/b8db1a8e-13c3-41be-9f21-24077d0e4e29-kube-api-access-4d9rh\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306752 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306861 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.306944 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: 
I0121 16:03:35.408000 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-scripts\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408235 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408359 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-config\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408447 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9rh\" (UniqueName: \"kubernetes.io/projected/b8db1a8e-13c3-41be-9f21-24077d0e4e29-kube-api-access-4d9rh\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408529 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408648 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408744 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408900 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-scripts\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.408932 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.409355 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8db1a8e-13c3-41be-9f21-24077d0e4e29-config\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.414852 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.417287 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.417718 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8db1a8e-13c3-41be-9f21-24077d0e4e29-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.434997 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9rh\" (UniqueName: \"kubernetes.io/projected/b8db1a8e-13c3-41be-9f21-24077d0e4e29-kube-api-access-4d9rh\") pod \"ovn-northd-0\" (UID: \"b8db1a8e-13c3-41be-9f21-24077d0e4e29\") " pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.561393 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.724879 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.805998 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"] Jan 21 16:03:35 crc kubenswrapper[4902]: I0121 16:03:35.806314 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="dnsmasq-dns" containerID="cri-o://dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133" gracePeriod=10 Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.024309 4902 generic.go:334] "Generic (PLEG): container finished" podID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerID="dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133" exitCode=0 Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.024643 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerDied","Data":"dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133"} Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.079939 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.193801 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.324268 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") pod \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.324496 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") pod \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.324564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") pod \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\" (UID: \"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495\") " Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.329092 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb" (OuterVolumeSpecName: "kube-api-access-5fngb") pod "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" (UID: "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495"). InnerVolumeSpecName "kube-api-access-5fngb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.376171 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" (UID: "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.383154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config" (OuterVolumeSpecName: "config") pod "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" (UID: "9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.426034 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.426094 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:36 crc kubenswrapper[4902]: I0121 16:03:36.426109 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fngb\" (UniqueName: \"kubernetes.io/projected/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495-kube-api-access-5fngb\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.037306 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" event={"ID":"9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495","Type":"ContainerDied","Data":"44ae670f7eef2a4f69445e5a528bd2462006fde1e2ee9d0bfd1314e5b3fef469"} Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.037360 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-tv7h5" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.037403 4902 scope.go:117] "RemoveContainer" containerID="dcfdb86c7fc37ba60155fa847c386b16cdc514c878f35f9ebd3ae35f87d4d133" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.040335 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b8db1a8e-13c3-41be-9f21-24077d0e4e29","Type":"ContainerStarted","Data":"26ef1b8702c0d19e28bd0877e194c280b8dda6c145c70003041ce54fd44e2cff"} Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.040368 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b8db1a8e-13c3-41be-9f21-24077d0e4e29","Type":"ContainerStarted","Data":"d816bffd3d2f2355cd8337b089c56d42732f2bcb190939b8a40a6c3af3c7b3c7"} Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.040386 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b8db1a8e-13c3-41be-9f21-24077d0e4e29","Type":"ContainerStarted","Data":"eb4275d52e1ff49ba1311059884959115b5ec6aff61502ea0194dc7d6ef1c53c"} Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.040611 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.074623 4902 scope.go:117] "RemoveContainer" containerID="55e0afd2388c802fda6ed46b943d3d217c1e55bd357a95a943ea57a5cb135bcf" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.082849 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.082831822 podStartE2EDuration="2.082831822s" podCreationTimestamp="2026-01-21 16:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:37.07095489 +0000 UTC m=+5379.147787919" watchObservedRunningTime="2026-01-21 16:03:37.082831822 +0000 UTC m=+5379.159664851" Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.087326 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"] Jan 21 16:03:37 crc kubenswrapper[4902]: I0121 16:03:37.092776 4902 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-tv7h5"] Jan 21 16:03:38 crc kubenswrapper[4902]: I0121 16:03:38.304271 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" path="/var/lib/kubelet/pods/9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495/volumes" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.146727 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:03:40 crc kubenswrapper[4902]: E0121 16:03:40.147256 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="init" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.147268 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="init" Jan 21 16:03:40 crc kubenswrapper[4902]: E0121 16:03:40.147288 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="dnsmasq-dns" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.147295 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="dnsmasq-dns" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.147478 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa5aefc-ede5-4d16-a99b-1ea1b2ff2495" containerName="dnsmasq-dns" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.147979 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.166594 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.244888 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.245870 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.248907 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.260689 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.294665 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:03:40 crc kubenswrapper[4902]: E0121 16:03:40.294950 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.295006 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.295072 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.295206 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.295244 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.397888 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.398085 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " 
pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.398448 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.398505 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.398660 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.399144 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.418105 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") pod \"keystone-fd43-account-create-update-f6bm7\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.418167 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") pod \"keystone-db-create-9fbzk\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.475245 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:40 crc kubenswrapper[4902]: I0121 16:03:40.561344 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:41 crc kubenswrapper[4902]: W0121 16:03:41.073792 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd3463ca_5f37_4a7e_9f53_c32f2abe3502.slice/crio-f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289 WatchSource:0}: Error finding container f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289: Status 404 returned error can't find the container with id f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289 Jan 21 16:03:41 crc kubenswrapper[4902]: I0121 16:03:41.076007 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:03:41 crc kubenswrapper[4902]: I0121 16:03:41.207442 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:03:41 crc kubenswrapper[4902]: W0121 16:03:41.210417 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b9f6374_66c7_4124_b410_c5d60c8f0d6b.slice/crio-3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050 WatchSource:0}: Error finding container 3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050: Status 404 returned error can't find the container with id 3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050 Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.087884 4902 generic.go:334] "Generic (PLEG): container finished" podID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" containerID="994f6f05fed4b0e62e48fa8578c2ecb21f387018408d5954555b07ebf19b3b49" exitCode=0 Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.087949 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd43-account-create-update-f6bm7" event={"ID":"0b9f6374-66c7-4124-b410-c5d60c8f0d6b","Type":"ContainerDied","Data":"994f6f05fed4b0e62e48fa8578c2ecb21f387018408d5954555b07ebf19b3b49"} Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.088262 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd43-account-create-update-f6bm7" event={"ID":"0b9f6374-66c7-4124-b410-c5d60c8f0d6b","Type":"ContainerStarted","Data":"3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050"} Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.090512 4902 generic.go:334] "Generic (PLEG): container finished" podID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" containerID="94e5637468147f71d442912ca57ee6a969ce1c74828b8408d61b57b6d26eda33" exitCode=0 Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.090557 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9fbzk" event={"ID":"dd3463ca-5f37-4a7e-9f53-c32f2abe3502","Type":"ContainerDied","Data":"94e5637468147f71d442912ca57ee6a969ce1c74828b8408d61b57b6d26eda33"} Jan 21 16:03:42 crc kubenswrapper[4902]: I0121 16:03:42.090590 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9fbzk" event={"ID":"dd3463ca-5f37-4a7e-9f53-c32f2abe3502","Type":"ContainerStarted","Data":"f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289"} Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.473812 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.553998 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") pod \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.554132 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") pod \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\" (UID: \"0b9f6374-66c7-4124-b410-c5d60c8f0d6b\") " Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.555148 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b9f6374-66c7-4124-b410-c5d60c8f0d6b" (UID: "0b9f6374-66c7-4124-b410-c5d60c8f0d6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.560648 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q" (OuterVolumeSpecName: "kube-api-access-29s8q") pod "0b9f6374-66c7-4124-b410-c5d60c8f0d6b" (UID: "0b9f6374-66c7-4124-b410-c5d60c8f0d6b"). InnerVolumeSpecName "kube-api-access-29s8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.561817 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.655889 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") pod \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.656055 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") pod \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\" (UID: \"dd3463ca-5f37-4a7e-9f53-c32f2abe3502\") " Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.656388 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29s8q\" (UniqueName: \"kubernetes.io/projected/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-kube-api-access-29s8q\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.656402 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9f6374-66c7-4124-b410-c5d60c8f0d6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.657145 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd3463ca-5f37-4a7e-9f53-c32f2abe3502" (UID: "dd3463ca-5f37-4a7e-9f53-c32f2abe3502"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.659602 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9" (OuterVolumeSpecName: "kube-api-access-zskk9") pod "dd3463ca-5f37-4a7e-9f53-c32f2abe3502" (UID: "dd3463ca-5f37-4a7e-9f53-c32f2abe3502"). InnerVolumeSpecName "kube-api-access-zskk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.757775 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zskk9\" (UniqueName: \"kubernetes.io/projected/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-kube-api-access-zskk9\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4902]: I0121 16:03:43.757813 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd3463ca-5f37-4a7e-9f53-c32f2abe3502-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.108143 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9fbzk" event={"ID":"dd3463ca-5f37-4a7e-9f53-c32f2abe3502","Type":"ContainerDied","Data":"f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289"} Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.108595 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b8eb4601e0c26c26e896dc3a959fad15e60b9c8ee5c1b2c51c4659ff955289" Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.108186 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9fbzk" Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.110238 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd43-account-create-update-f6bm7" event={"ID":"0b9f6374-66c7-4124-b410-c5d60c8f0d6b","Type":"ContainerDied","Data":"3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050"} Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.110277 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3854c16cd249a08d3a0c5a72df006174d88fc50b746387702607178c4b097050" Jan 21 16:03:44 crc kubenswrapper[4902]: I0121 16:03:44.110347 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd43-account-create-update-f6bm7" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.691125 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:03:45 crc kubenswrapper[4902]: E0121 16:03:45.715774 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" containerName="mariadb-account-create-update" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.715821 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" containerName="mariadb-account-create-update" Jan 21 16:03:45 crc kubenswrapper[4902]: E0121 16:03:45.715874 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" containerName="mariadb-database-create" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.715886 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" containerName="mariadb-database-create" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.716600 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" containerName="mariadb-database-create" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.716627 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" containerName="mariadb-account-create-update" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.717447 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.717571 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.720519 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.720655 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92pp7" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.720687 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.720804 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.792009 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.792084 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.792114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.893269 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.893322 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.893349 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.899925 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.901001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:45 crc kubenswrapper[4902]: I0121 16:03:45.911256 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") pod \"keystone-db-sync-m6jz2\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:46 crc kubenswrapper[4902]: I0121 16:03:46.055119 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:46 crc kubenswrapper[4902]: I0121 16:03:46.469668 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:03:46 crc kubenswrapper[4902]: W0121 16:03:46.481321 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod072d9d46_6930_490e_9561_cd7e75f05451.slice/crio-8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a WatchSource:0}: Error finding container 8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a: Status 404 returned error can't find the container with id 8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a Jan 21 16:03:47 crc kubenswrapper[4902]: I0121 16:03:47.128967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m6jz2" event={"ID":"072d9d46-6930-490e-9561-cd7e75f05451","Type":"ContainerStarted","Data":"c898501a393ec12d8bdad3ffbecedd820d45983cad8f57e77c1b8bf1f2602ced"} Jan 21 16:03:47 crc kubenswrapper[4902]: I0121 16:03:47.129319 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m6jz2" event={"ID":"072d9d46-6930-490e-9561-cd7e75f05451","Type":"ContainerStarted","Data":"8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a"} Jan 21 16:03:47 crc kubenswrapper[4902]: I0121 16:03:47.149194 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-m6jz2" podStartSLOduration=2.149170179 podStartE2EDuration="2.149170179s" podCreationTimestamp="2026-01-21 16:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:47.144305723 +0000 UTC m=+5389.221138812" watchObservedRunningTime="2026-01-21 16:03:47.149170179 +0000 UTC m=+5389.226003238" Jan 21 16:03:49 crc kubenswrapper[4902]: I0121 16:03:49.146597 4902 generic.go:334] "Generic (PLEG): container finished" podID="072d9d46-6930-490e-9561-cd7e75f05451" containerID="c898501a393ec12d8bdad3ffbecedd820d45983cad8f57e77c1b8bf1f2602ced" exitCode=0 Jan 21 16:03:49 crc kubenswrapper[4902]: I0121 16:03:49.146667 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m6jz2" event={"ID":"072d9d46-6930-490e-9561-cd7e75f05451","Type":"ContainerDied","Data":"c898501a393ec12d8bdad3ffbecedd820d45983cad8f57e77c1b8bf1f2602ced"} Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.521253 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.566849 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") pod \"072d9d46-6930-490e-9561-cd7e75f05451\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.567103 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") pod \"072d9d46-6930-490e-9561-cd7e75f05451\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.567134 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") pod \"072d9d46-6930-490e-9561-cd7e75f05451\" (UID: \"072d9d46-6930-490e-9561-cd7e75f05451\") " Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.577101 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq" (OuterVolumeSpecName: "kube-api-access-lfdwq") pod "072d9d46-6930-490e-9561-cd7e75f05451" (UID: "072d9d46-6930-490e-9561-cd7e75f05451"). InnerVolumeSpecName "kube-api-access-lfdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.597596 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "072d9d46-6930-490e-9561-cd7e75f05451" (UID: "072d9d46-6930-490e-9561-cd7e75f05451"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.623459 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.626245 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data" (OuterVolumeSpecName: "config-data") pod "072d9d46-6930-490e-9561-cd7e75f05451" (UID: "072d9d46-6930-490e-9561-cd7e75f05451"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.681173 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfdwq\" (UniqueName: \"kubernetes.io/projected/072d9d46-6930-490e-9561-cd7e75f05451-kube-api-access-lfdwq\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.681210 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:50 crc kubenswrapper[4902]: I0121 16:03:50.681223 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/072d9d46-6930-490e-9561-cd7e75f05451-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.167451 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-m6jz2" event={"ID":"072d9d46-6930-490e-9561-cd7e75f05451","Type":"ContainerDied","Data":"8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a"} Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.167504 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8692db0225eac7ec321d256533dacda40d0bb2ca0ad4c89595a09a8bdbba897a" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.167557 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-m6jz2" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.426006 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:03:51 crc kubenswrapper[4902]: E0121 16:03:51.426413 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="072d9d46-6930-490e-9561-cd7e75f05451" containerName="keystone-db-sync" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.426430 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="072d9d46-6930-490e-9561-cd7e75f05451" containerName="keystone-db-sync" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.426588 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="072d9d46-6930-490e-9561-cd7e75f05451" containerName="keystone-db-sync" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.427401 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.446977 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.470454 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.471819 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.474777 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.475186 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.475287 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.475348 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92pp7" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.475366 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.482023 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.494912 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.494978 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.495033 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.495106 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.495135 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.597429 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.597497 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.597551 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.597639 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.598105 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.598478 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599008 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599127 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599375 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.599416 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.601845 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.601930 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.602777 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.603073 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.618851 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") pod \"dnsmasq-dns-b4c895fc-lcmhk\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704255 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704631 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704710 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704755 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") pod 
\"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704788 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.704823 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.710015 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.711987 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.712530 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.713842 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.720186 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.722369 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") pod \"keystone-bootstrap-vmkwm\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.744639 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:51 crc kubenswrapper[4902]: I0121 16:03:51.790915 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:52 crc kubenswrapper[4902]: I0121 16:03:52.206307 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:03:52 crc kubenswrapper[4902]: W0121 16:03:52.209084 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2ab6913_fdd0_4944_8c16_c213aecdd825.slice/crio-f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe WatchSource:0}: Error finding container f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe: Status 404 returned error can't find the container with id f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe Jan 21 16:03:52 crc kubenswrapper[4902]: I0121 16:03:52.333136 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.183774 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerDied","Data":"c548aa5ba6d350e77b6beec3d64af186cf452dd8633be8614338761c7800ca06"} Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.184907 4902 generic.go:334] "Generic (PLEG): container finished" podID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerID="c548aa5ba6d350e77b6beec3d64af186cf452dd8633be8614338761c7800ca06" exitCode=0 Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.185066 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerStarted","Data":"f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe"} Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.187379 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmkwm" event={"ID":"986c3cfd-00c0-4c5f-a3af-ef42bb380140","Type":"ContainerStarted","Data":"aba9f698bb3c03d4e31ec5eca5323d9be2568c046cc99860cd7803581de5e34e"} Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.187467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmkwm" event={"ID":"986c3cfd-00c0-4c5f-a3af-ef42bb380140","Type":"ContainerStarted","Data":"841a20b7c5a423f1f6ce2baa8b54da8b3caff167a48c142e143e37a1664a974a"} Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.264879 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vmkwm" podStartSLOduration=2.264852249 podStartE2EDuration="2.264852249s" podCreationTimestamp="2026-01-21 16:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:53.261709051 +0000 UTC m=+5395.338542110" watchObservedRunningTime="2026-01-21 16:03:53.264852249 +0000 UTC m=+5395.341685278" Jan 21 16:03:53 crc kubenswrapper[4902]: I0121 16:03:53.294694 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:03:53 crc kubenswrapper[4902]: E0121 16:03:53.294979 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:03:54 crc kubenswrapper[4902]: I0121 16:03:54.197867 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerStarted","Data":"d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0"} Jan 21 16:03:54 crc kubenswrapper[4902]: I0121 16:03:54.198613 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:03:54 crc kubenswrapper[4902]: I0121 16:03:54.222996 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" podStartSLOduration=3.222963775 podStartE2EDuration="3.222963775s" podCreationTimestamp="2026-01-21 16:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:54.212977326 +0000 UTC m=+5396.289810365" watchObservedRunningTime="2026-01-21 16:03:54.222963775 +0000 UTC m=+5396.299796844" Jan 21 16:03:57 crc kubenswrapper[4902]: I0121 16:03:57.223618 4902 generic.go:334] "Generic (PLEG): container finished" podID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" containerID="aba9f698bb3c03d4e31ec5eca5323d9be2568c046cc99860cd7803581de5e34e" exitCode=0 Jan 21 16:03:57 crc kubenswrapper[4902]: I0121 16:03:57.223677 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmkwm" event={"ID":"986c3cfd-00c0-4c5f-a3af-ef42bb380140","Type":"ContainerDied","Data":"aba9f698bb3c03d4e31ec5eca5323d9be2568c046cc99860cd7803581de5e34e"} Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.605603 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728522 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728577 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728613 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728714 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.728827 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") pod \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\" (UID: \"986c3cfd-00c0-4c5f-a3af-ef42bb380140\") " Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.734803 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99" (OuterVolumeSpecName: "kube-api-access-qtn99") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "kube-api-access-qtn99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.735480 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.735864 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.737338 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts" (OuterVolumeSpecName: "scripts") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.755818 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data" (OuterVolumeSpecName: "config-data") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.762177 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "986c3cfd-00c0-4c5f-a3af-ef42bb380140" (UID: "986c3cfd-00c0-4c5f-a3af-ef42bb380140"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831138 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831187 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831198 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831208 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831216 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/986c3cfd-00c0-4c5f-a3af-ef42bb380140-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:58 crc kubenswrapper[4902]: I0121 16:03:58.831224 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtn99\" (UniqueName: \"kubernetes.io/projected/986c3cfd-00c0-4c5f-a3af-ef42bb380140-kube-api-access-qtn99\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.239573 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vmkwm" event={"ID":"986c3cfd-00c0-4c5f-a3af-ef42bb380140","Type":"ContainerDied","Data":"841a20b7c5a423f1f6ce2baa8b54da8b3caff167a48c142e143e37a1664a974a"} Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.239614 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841a20b7c5a423f1f6ce2baa8b54da8b3caff167a48c142e143e37a1664a974a" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.239685 4902 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vmkwm" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.328019 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.334342 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vmkwm"] Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.414420 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:03:59 crc kubenswrapper[4902]: E0121 16:03:59.414744 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" containerName="keystone-bootstrap" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.414762 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" containerName="keystone-bootstrap" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.414974 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" containerName="keystone-bootstrap" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.415665 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.425403 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.425708 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.425609 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.425873 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92pp7" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.429411 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.432910 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541290 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541401 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541444 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" 
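The SyncLoop DELETE/REMOVE of openstack/keystone-bootstrap-vmkwm followed by the SyncLoop ADD of its replacement openstack/keystone-bootstrap-clvkp above is the kubelet reacting to the job pod being re-created on the API server, then re-attaching the job's secret volumes. A minimal client-go sketch, assuming in-cluster credentials and the same "openstack" namespace (not the kubelet's own code; names here are illustrative), that watches for those same pod transitions:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster credentials are assumed; out-of-cluster use would build the
	// config from a kubeconfig via clientcmd instead.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch pods in the "openstack" namespace; each event corresponds to one
	// of the SyncLoop ADD/UPDATE/DELETE transitions recorded in the log.
	w, err := clientset.CoreV1().Pods("openstack").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		if pod, ok := ev.Object.(*corev1.Pod); ok {
			fmt.Printf("%s pod=%s/%s phase=%s\n", ev.Type, pod.Namespace, pod.Name, pod.Status.Phase)
		}
	}
}

Run while the bootstrap job is re-created, this would print ADDED/MODIFIED/DELETED lines for keystone-bootstrap-clvkp that line up with the SyncLoop entries that follow.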
Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541667 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.541935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.643867 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.643994 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.644110 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.644156 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.644184 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.644231 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.649753 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.650200 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.651918 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.653034 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.653552 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.664658 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") pod \"keystone-bootstrap-clvkp\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:03:59 crc kubenswrapper[4902]: I0121 16:03:59.736697 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:04:00 crc kubenswrapper[4902]: I0121 16:04:00.032978 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:04:00 crc kubenswrapper[4902]: I0121 16:04:00.270224 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clvkp" event={"ID":"663c22ab-26c3-4d29-8965-255dc095eef2","Type":"ContainerStarted","Data":"8e7b81ffed093606aaee9fbef35f94103abd1548cced4aa289004fb371568398"} Jan 21 16:04:00 crc kubenswrapper[4902]: I0121 16:04:00.270631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clvkp" event={"ID":"663c22ab-26c3-4d29-8965-255dc095eef2","Type":"ContainerStarted","Data":"d409881154f9d8385023276daaa4e4cc4b728edd944f8b0811375cdf56503acc"} Jan 21 16:04:00 crc kubenswrapper[4902]: I0121 16:04:00.304693 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986c3cfd-00c0-4c5f-a3af-ef42bb380140" path="/var/lib/kubelet/pods/986c3cfd-00c0-4c5f-a3af-ef42bb380140/volumes" Jan 21 16:04:01 crc kubenswrapper[4902]: I0121 16:04:01.746200 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:04:01 crc kubenswrapper[4902]: I0121 16:04:01.761757 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-clvkp" podStartSLOduration=2.761736286 podStartE2EDuration="2.761736286s" podCreationTimestamp="2026-01-21 16:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:04:00.289996253 +0000 UTC m=+5402.366829282" watchObservedRunningTime="2026-01-21 16:04:01.761736286 +0000 UTC m=+5403.838569315" Jan 21 16:04:01 crc kubenswrapper[4902]: I0121 16:04:01.807994 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:04:01 crc kubenswrapper[4902]: I0121 16:04:01.808573 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57f688859c-fb82z" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="dnsmasq-dns" containerID="cri-o://401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13" gracePeriod=10 Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.295997 4902 generic.go:334] "Generic (PLEG): container finished" podID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerID="401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13" exitCode=0 Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.303541 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerDied","Data":"401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13"} Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.303589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f688859c-fb82z" event={"ID":"fdfebc8b-bc5c-4214-acee-021a404994bf","Type":"ContainerDied","Data":"f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0"} Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.303604 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3c03b700062d45894766c079b31174948f641c82af2f122619825aabb3684d0" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.305606 4902 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406648 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406739 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406850 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.406884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") pod \"fdfebc8b-bc5c-4214-acee-021a404994bf\" (UID: \"fdfebc8b-bc5c-4214-acee-021a404994bf\") " Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.412670 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t" (OuterVolumeSpecName: "kube-api-access-7b77t") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "kube-api-access-7b77t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.446847 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.449446 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.456959 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config" (OuterVolumeSpecName: "config") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.460223 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdfebc8b-bc5c-4214-acee-021a404994bf" (UID: "fdfebc8b-bc5c-4214-acee-021a404994bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.508962 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.509000 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.509010 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.509021 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdfebc8b-bc5c-4214-acee-021a404994bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4902]: I0121 16:04:02.509032 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b77t\" (UniqueName: \"kubernetes.io/projected/fdfebc8b-bc5c-4214-acee-021a404994bf-kube-api-access-7b77t\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.309423 4902 generic.go:334] "Generic (PLEG): container finished" podID="663c22ab-26c3-4d29-8965-255dc095eef2" containerID="8e7b81ffed093606aaee9fbef35f94103abd1548cced4aa289004fb371568398" exitCode=0 Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.309476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clvkp" event={"ID":"663c22ab-26c3-4d29-8965-255dc095eef2","Type":"ContainerDied","Data":"8e7b81ffed093606aaee9fbef35f94103abd1548cced4aa289004fb371568398"} Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.309795 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f688859c-fb82z" Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.360882 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:04:03 crc kubenswrapper[4902]: I0121 16:04:03.368473 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57f688859c-fb82z"] Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.302774 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" path="/var/lib/kubelet/pods/fdfebc8b-bc5c-4214-acee-021a404994bf/volumes" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.629520 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748708 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748827 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748860 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748906 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.748939 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.749066 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") pod \"663c22ab-26c3-4d29-8965-255dc095eef2\" (UID: \"663c22ab-26c3-4d29-8965-255dc095eef2\") " Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.758861 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh" (OuterVolumeSpecName: "kube-api-access-dd8wh") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "kube-api-access-dd8wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.760301 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.760333 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts" (OuterVolumeSpecName: "scripts") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.760794 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.782862 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.786617 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data" (OuterVolumeSpecName: "config-data") pod "663c22ab-26c3-4d29-8965-255dc095eef2" (UID: "663c22ab-26c3-4d29-8965-255dc095eef2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851524 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851566 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851580 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851593 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd8wh\" (UniqueName: \"kubernetes.io/projected/663c22ab-26c3-4d29-8965-255dc095eef2-kube-api-access-dd8wh\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851604 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:04 crc kubenswrapper[4902]: I0121 16:04:04.851613 4902 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/663c22ab-26c3-4d29-8965-255dc095eef2-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.323399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-clvkp" event={"ID":"663c22ab-26c3-4d29-8965-255dc095eef2","Type":"ContainerDied","Data":"d409881154f9d8385023276daaa4e4cc4b728edd944f8b0811375cdf56503acc"} Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.323440 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d409881154f9d8385023276daaa4e4cc4b728edd944f8b0811375cdf56503acc" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.323461 4902 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-clvkp" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.408765 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-67bfc4c47-flndt"] Jan 21 16:04:05 crc kubenswrapper[4902]: E0121 16:04:05.409166 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663c22ab-26c3-4d29-8965-255dc095eef2" containerName="keystone-bootstrap" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409187 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="663c22ab-26c3-4d29-8965-255dc095eef2" containerName="keystone-bootstrap" Jan 21 16:04:05 crc kubenswrapper[4902]: E0121 16:04:05.409201 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="init" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409209 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="init" Jan 21 16:04:05 crc kubenswrapper[4902]: E0121 16:04:05.409236 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="dnsmasq-dns" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409244 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="dnsmasq-dns" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409420 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfebc8b-bc5c-4214-acee-021a404994bf" containerName="dnsmasq-dns" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.409442 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="663c22ab-26c3-4d29-8965-255dc095eef2" containerName="keystone-bootstrap" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.410139 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.412836 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.412843 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.412958 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.413496 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.413498 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.413755 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-92pp7" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.427197 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67bfc4c47-flndt"] Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565337 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-config-data\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565562 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-scripts\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565668 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqv86\" (UniqueName: \"kubernetes.io/projected/1bc7e490-49b1-4eef-ab29-4453235cf752-kube-api-access-kqv86\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565752 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-combined-ca-bundle\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565841 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-internal-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.565923 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-public-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: 
\"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.566147 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-credential-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.566288 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-fernet-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667403 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-fernet-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667478 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-config-data\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-scripts\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667538 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqv86\" (UniqueName: \"kubernetes.io/projected/1bc7e490-49b1-4eef-ab29-4453235cf752-kube-api-access-kqv86\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667557 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-combined-ca-bundle\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667591 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-internal-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.667620 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-public-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc 
kubenswrapper[4902]: I0121 16:04:05.667643 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-credential-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.671362 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-credential-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.671475 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-public-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.672449 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-scripts\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.673055 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-config-data\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.673829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-fernet-keys\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.675346 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-combined-ca-bundle\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.676473 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bc7e490-49b1-4eef-ab29-4453235cf752-internal-tls-certs\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.684775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqv86\" (UniqueName: \"kubernetes.io/projected/1bc7e490-49b1-4eef-ab29-4453235cf752-kube-api-access-kqv86\") pod \"keystone-67bfc4c47-flndt\" (UID: \"1bc7e490-49b1-4eef-ab29-4453235cf752\") " pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:05 crc kubenswrapper[4902]: I0121 16:04:05.764475 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:06 crc kubenswrapper[4902]: I0121 16:04:06.193741 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67bfc4c47-flndt"] Jan 21 16:04:06 crc kubenswrapper[4902]: W0121 16:04:06.197478 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bc7e490_49b1_4eef_ab29_4453235cf752.slice/crio-2b5a54b234682175cc1ef1f64c55ed18ff50fdcf429befb210b8ea2f3117936e WatchSource:0}: Error finding container 2b5a54b234682175cc1ef1f64c55ed18ff50fdcf429befb210b8ea2f3117936e: Status 404 returned error can't find the container with id 2b5a54b234682175cc1ef1f64c55ed18ff50fdcf429befb210b8ea2f3117936e Jan 21 16:04:06 crc kubenswrapper[4902]: I0121 16:04:06.333478 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67bfc4c47-flndt" event={"ID":"1bc7e490-49b1-4eef-ab29-4453235cf752","Type":"ContainerStarted","Data":"2b5a54b234682175cc1ef1f64c55ed18ff50fdcf429befb210b8ea2f3117936e"} Jan 21 16:04:07 crc kubenswrapper[4902]: I0121 16:04:07.294991 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:04:07 crc kubenswrapper[4902]: E0121 16:04:07.295625 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:04:07 crc kubenswrapper[4902]: I0121 16:04:07.342666 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67bfc4c47-flndt" event={"ID":"1bc7e490-49b1-4eef-ab29-4453235cf752","Type":"ContainerStarted","Data":"f1346ef846aac7fea42a88a6bdd4bb7ec6ffb6acdf430d21727340e0fbaa8000"} Jan 21 16:04:07 crc kubenswrapper[4902]: I0121 16:04:07.342794 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:07 crc kubenswrapper[4902]: I0121 16:04:07.366692 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-67bfc4c47-flndt" podStartSLOduration=2.366657859 podStartE2EDuration="2.366657859s" podCreationTimestamp="2026-01-21 16:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:04:07.363160171 +0000 UTC m=+5409.439993210" watchObservedRunningTime="2026-01-21 16:04:07.366657859 +0000 UTC m=+5409.443490888" Jan 21 16:04:19 crc kubenswrapper[4902]: I0121 16:04:19.295887 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:04:19 crc kubenswrapper[4902]: E0121 16:04:19.296945 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:04:32 crc kubenswrapper[4902]: I0121 16:04:32.295524 4902 scope.go:117] 
"RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:04:32 crc kubenswrapper[4902]: E0121 16:04:32.296531 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:04:37 crc kubenswrapper[4902]: I0121 16:04:37.346285 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-67bfc4c47-flndt" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.169430 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.170785 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.172763 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.173442 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-p6shw" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.173998 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.180279 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.260438 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.260803 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.260851 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.260902 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.363076 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.363162 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.363298 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.363317 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.364836 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.386080 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.409097 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.409638 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") pod \"openstackclient\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.491683 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:04:40 crc kubenswrapper[4902]: I0121 16:04:40.953728 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:04:41 crc kubenswrapper[4902]: I0121 16:04:41.633436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f901a0e2-6941-4d4e-a90a-2905acf87521","Type":"ContainerStarted","Data":"0820f291e3e79ca9f589a0a9fd094ceca1ca151624389e86bb426b3920d38db1"} Jan 21 16:04:41 crc kubenswrapper[4902]: I0121 16:04:41.633740 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f901a0e2-6941-4d4e-a90a-2905acf87521","Type":"ContainerStarted","Data":"1e1d0a1b83d0024d201e7ee3eaa5897f636699fac45162048a3d139fcb0fa621"} Jan 21 16:04:41 crc kubenswrapper[4902]: I0121 16:04:41.664309 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.66427788 podStartE2EDuration="1.66427788s" podCreationTimestamp="2026-01-21 16:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:04:41.652783608 +0000 UTC m=+5443.729616697" watchObservedRunningTime="2026-01-21 16:04:41.66427788 +0000 UTC m=+5443.741110949" Jan 21 16:04:46 crc kubenswrapper[4902]: I0121 16:04:46.295752 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:04:46 crc kubenswrapper[4902]: E0121 16:04:46.296365 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:05:00 crc kubenswrapper[4902]: I0121 16:05:00.295646 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:05:00 crc kubenswrapper[4902]: E0121 16:05:00.296844 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:05:12 crc kubenswrapper[4902]: I0121 16:05:12.295271 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:05:12 crc kubenswrapper[4902]: E0121 16:05:12.296577 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:05:26 crc kubenswrapper[4902]: I0121 16:05:26.295207 4902 scope.go:117] "RemoveContainer" 
containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:05:26 crc kubenswrapper[4902]: E0121 16:05:26.295966 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:05:39 crc kubenswrapper[4902]: I0121 16:05:39.294846 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:05:39 crc kubenswrapper[4902]: E0121 16:05:39.295925 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:05:51 crc kubenswrapper[4902]: I0121 16:05:51.294773 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:05:51 crc kubenswrapper[4902]: E0121 16:05:51.297601 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:02 crc kubenswrapper[4902]: I0121 16:06:02.297664 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:02 crc kubenswrapper[4902]: E0121 16:06:02.299314 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.294768 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:16 crc kubenswrapper[4902]: E0121 16:06:16.295624 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.726306 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-85k9w"] Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.727525 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.740300 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-85k9w"] Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.748903 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"] Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.749946 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.752098 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.757438 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"] Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.865814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.865921 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.866103 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.866212 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.967344 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.967433 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.967459 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.967513 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.968307 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.968805 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.987641 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") pod \"barbican-0136-account-create-update-k4cmq\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:16 crc kubenswrapper[4902]: I0121 16:06:16.991090 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") pod \"barbican-db-create-85k9w\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:17 crc kubenswrapper[4902]: I0121 16:06:17.047657 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:17 crc kubenswrapper[4902]: I0121 16:06:17.065525 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:17 crc kubenswrapper[4902]: I0121 16:06:17.491683 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-85k9w"] Jan 21 16:06:17 crc kubenswrapper[4902]: I0121 16:06:17.570573 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"] Jan 21 16:06:17 crc kubenswrapper[4902]: W0121 16:06:17.580606 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc6e2c0_6737_49e0_b5d8_f77a5de0a7f8.slice/crio-b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef WatchSource:0}: Error finding container b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef: Status 404 returned error can't find the container with id b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.452903 4902 generic.go:334] "Generic (PLEG): container finished" podID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" containerID="c71cec8eacda47056c7a215f2b04bc9d493e2cbfdf871841495ef07bfb7eb7a5" exitCode=0 Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.452992 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0136-account-create-update-k4cmq" event={"ID":"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8","Type":"ContainerDied","Data":"c71cec8eacda47056c7a215f2b04bc9d493e2cbfdf871841495ef07bfb7eb7a5"} Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.453277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0136-account-create-update-k4cmq" event={"ID":"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8","Type":"ContainerStarted","Data":"b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef"} Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.455227 4902 generic.go:334] "Generic (PLEG): container finished" podID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" containerID="e7ae920f7061533fd1ae5c5eabfd18124e9c27f0aad7594a5b9ba20211753b38" exitCode=0 Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.455259 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85k9w" event={"ID":"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06","Type":"ContainerDied","Data":"e7ae920f7061533fd1ae5c5eabfd18124e9c27f0aad7594a5b9ba20211753b38"} Jan 21 16:06:18 crc kubenswrapper[4902]: I0121 16:06:18.455277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85k9w" event={"ID":"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06","Type":"ContainerStarted","Data":"134c81f0abf8e37719a1daae08481f6ead7f458acd92a02ec1a2553905e643b7"} Jan 21 16:06:19 crc kubenswrapper[4902]: I0121 16:06:19.883711 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:19 crc kubenswrapper[4902]: I0121 16:06:19.890779 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023086 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") pod \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023134 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") pod \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023162 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") pod \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\" (UID: \"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8\") " Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023298 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") pod \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\" (UID: \"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06\") " Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023600 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" (UID: "e8c6f518-fd8b-4c60-9f36-1eb57bd30b06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.023833 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" (UID: "edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.028724 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk" (OuterVolumeSpecName: "kube-api-access-dm9sk") pod "e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" (UID: "e8c6f518-fd8b-4c60-9f36-1eb57bd30b06"). InnerVolumeSpecName "kube-api-access-dm9sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.029569 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc" (OuterVolumeSpecName: "kube-api-access-76jlc") pod "edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" (UID: "edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8"). InnerVolumeSpecName "kube-api-access-76jlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.125263 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm9sk\" (UniqueName: \"kubernetes.io/projected/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-kube-api-access-dm9sk\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.125312 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.125321 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.125342 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76jlc\" (UniqueName: \"kubernetes.io/projected/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8-kube-api-access-76jlc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.490263 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0136-account-create-update-k4cmq" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.490255 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0136-account-create-update-k4cmq" event={"ID":"edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8","Type":"ContainerDied","Data":"b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef"} Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.491293 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6c7a3a5979b7a774d7759ed19e39816ae89d88b76761b46d2df20ec67b763ef" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.493513 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85k9w" event={"ID":"e8c6f518-fd8b-4c60-9f36-1eb57bd30b06","Type":"ContainerDied","Data":"134c81f0abf8e37719a1daae08481f6ead7f458acd92a02ec1a2553905e643b7"} Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.493539 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="134c81f0abf8e37719a1daae08481f6ead7f458acd92a02ec1a2553905e643b7" Jan 21 16:06:20 crc kubenswrapper[4902]: I0121 16:06:20.493601 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-85k9w" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029091 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7k4p6"] Jan 21 16:06:22 crc kubenswrapper[4902]: E0121 16:06:22.029504 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" containerName="mariadb-database-create" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029524 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" containerName="mariadb-database-create" Jan 21 16:06:22 crc kubenswrapper[4902]: E0121 16:06:22.029567 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" containerName="mariadb-account-create-update" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029575 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" containerName="mariadb-account-create-update" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029753 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" containerName="mariadb-account-create-update" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.029773 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" containerName="mariadb-database-create" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.030441 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.034685 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-b64cz" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.034886 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.063448 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7k4p6"] Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.169163 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.169226 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.169283 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.270818 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.270881 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.271005 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.275645 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.276840 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.291604 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") pod \"barbican-db-sync-7k4p6\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.375693 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:22 crc kubenswrapper[4902]: I0121 16:06:22.619089 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7k4p6"] Jan 21 16:06:22 crc kubenswrapper[4902]: W0121 16:06:22.633770 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58a9fed4_e340_4ac7_a3a6_750ce7aa3ad2.slice/crio-497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb WatchSource:0}: Error finding container 497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb: Status 404 returned error can't find the container with id 497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb Jan 21 16:06:23 crc kubenswrapper[4902]: I0121 16:06:23.516058 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7k4p6" event={"ID":"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2","Type":"ContainerStarted","Data":"65fe44f3b0e17d56dbcc24184af4bec7f8662c78351c1314a8a65ecfa5dbb257"} Jan 21 16:06:23 crc kubenswrapper[4902]: I0121 16:06:23.516364 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7k4p6" event={"ID":"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2","Type":"ContainerStarted","Data":"497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb"} Jan 21 16:06:23 crc kubenswrapper[4902]: I0121 16:06:23.536118 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7k4p6" podStartSLOduration=2.53610071 podStartE2EDuration="2.53610071s" podCreationTimestamp="2026-01-21 16:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:23.529092333 +0000 UTC m=+5545.605925372" watchObservedRunningTime="2026-01-21 16:06:23.53610071 +0000 UTC m=+5545.612933739" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.528518 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.530309 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.541192 4902 generic.go:334] "Generic (PLEG): container finished" podID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" containerID="65fe44f3b0e17d56dbcc24184af4bec7f8662c78351c1314a8a65ecfa5dbb257" exitCode=0 Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.541234 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7k4p6" event={"ID":"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2","Type":"ContainerDied","Data":"65fe44f3b0e17d56dbcc24184af4bec7f8662c78351c1314a8a65ecfa5dbb257"} Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.547404 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.625933 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.626013 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.626099 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.727917 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.728029 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.728117 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.728955 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " 
pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.729241 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.756916 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") pod \"redhat-operators-lqndd\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:25 crc kubenswrapper[4902]: I0121 16:06:25.855495 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:26 crc kubenswrapper[4902]: I0121 16:06:26.789342 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:26 crc kubenswrapper[4902]: W0121 16:06:26.807229 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddd543be_03fc_4a61_bb0b_55a066361a5f.slice/crio-ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce WatchSource:0}: Error finding container ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce: Status 404 returned error can't find the container with id ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce Jan 21 16:06:26 crc kubenswrapper[4902]: I0121 16:06:26.998741 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.150214 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") pod \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.150292 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") pod \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.150427 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") pod \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\" (UID: \"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2\") " Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.157861 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" (UID: "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.158338 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc" (OuterVolumeSpecName: "kube-api-access-d7nnc") pod "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" (UID: "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2"). InnerVolumeSpecName "kube-api-access-d7nnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.195226 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" (UID: "58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.252219 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.252263 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.252276 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7nnc\" (UniqueName: \"kubernetes.io/projected/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2-kube-api-access-d7nnc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.558907 4902 generic.go:334] "Generic (PLEG): container finished" podID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerID="c1974a18bb600f84ad592fcb1ec0fc601ac073e08d0d02562db2c3da418aff99" exitCode=0 Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.558971 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerDied","Data":"c1974a18bb600f84ad592fcb1ec0fc601ac073e08d0d02562db2c3da418aff99"} Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.559034 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerStarted","Data":"ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce"} Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.561182 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7k4p6" event={"ID":"58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2","Type":"ContainerDied","Data":"497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb"} Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.561212 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7k4p6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.561220 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="497d95fbf98203bf6a8c356b922e547bcb6fb481a1e1214efc82b5a776c64feb" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.728431 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-c94b5b747-nxfg6"] Jan 21 16:06:27 crc kubenswrapper[4902]: E0121 16:06:27.728864 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" containerName="barbican-db-sync" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.728882 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" containerName="barbican-db-sync" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.729112 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" containerName="barbican-db-sync" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.731944 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.734419 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-b64cz" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.734598 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.738223 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.753254 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c94b5b747-nxfg6"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.775248 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.777764 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.786289 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.807840 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.824310 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.830262 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862764 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data-custom\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862833 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data-custom\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862855 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-combined-ca-bundle\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862874 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-combined-ca-bundle\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862909 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-logs\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dzcd\" (UniqueName: \"kubernetes.io/projected/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-kube-api-access-5dzcd\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.862999 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.863023 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cef3f6-598c-483e-b2b6-bb3d2942f18e-logs\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.863054 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vddm\" (UniqueName: \"kubernetes.io/projected/95cef3f6-598c-483e-b2b6-bb3d2942f18e-kube-api-access-9vddm\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.867854 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.926178 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.927585 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.930120 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.933941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.965924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data-custom\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966589 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-combined-ca-bundle\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966618 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-combined-ca-bundle\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966649 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") pod \"dnsmasq-dns-85d446946c-gb4r2\" 
(UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966675 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966703 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-logs\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966741 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dzcd\" (UniqueName: \"kubernetes.io/projected/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-kube-api-access-5dzcd\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966823 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966861 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966895 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cef3f6-598c-483e-b2b6-bb3d2942f18e-logs\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966915 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vddm\" (UniqueName: \"kubernetes.io/projected/95cef3f6-598c-483e-b2b6-bb3d2942f18e-kube-api-access-9vddm\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966951 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.966981 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data-custom\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.968270 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cef3f6-598c-483e-b2b6-bb3d2942f18e-logs\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.969650 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-logs\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.972431 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-combined-ca-bundle\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.973661 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-combined-ca-bundle\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.974161 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.975806 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.978161 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cef3f6-598c-483e-b2b6-bb3d2942f18e-config-data-custom\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.979691 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-config-data-custom\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.984099 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vddm\" (UniqueName: \"kubernetes.io/projected/95cef3f6-598c-483e-b2b6-bb3d2942f18e-kube-api-access-9vddm\") pod \"barbican-keystone-listener-8458cc5fd6-z5j6z\" (UID: \"95cef3f6-598c-483e-b2b6-bb3d2942f18e\") " pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:27 crc kubenswrapper[4902]: I0121 16:06:27.986551 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dzcd\" (UniqueName: \"kubernetes.io/projected/9162d3ad-8f1a-4998-9f4d-a1869af6a23f-kube-api-access-5dzcd\") pod \"barbican-worker-c94b5b747-nxfg6\" (UID: \"9162d3ad-8f1a-4998-9f4d-a1869af6a23f\") " pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.069207 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.069885 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.069921 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070065 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070154 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070234 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070274 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070399 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070456 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.070613 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.071001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.071193 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.071385 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.071398 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.075689 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c94b5b747-nxfg6" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.099324 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") pod \"dnsmasq-dns-85d446946c-gb4r2\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.132291 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.165371 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172477 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172576 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172637 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.172656 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.177152 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.181660 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" 
Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.182087 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.185663 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.205121 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") pod \"barbican-api-659b467f5b-b29gg\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.251851 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.532171 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c94b5b747-nxfg6"] Jan 21 16:06:28 crc kubenswrapper[4902]: W0121 16:06:28.545669 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9162d3ad_8f1a_4998_9f4d_a1869af6a23f.slice/crio-13073fb5597aabcc0be45afe7891278491e8fe551edd5d8f36d37628cbf3ad78 WatchSource:0}: Error finding container 13073fb5597aabcc0be45afe7891278491e8fe551edd5d8f36d37628cbf3ad78: Status 404 returned error can't find the container with id 13073fb5597aabcc0be45afe7891278491e8fe551edd5d8f36d37628cbf3ad78 Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.589878 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c94b5b747-nxfg6" event={"ID":"9162d3ad-8f1a-4998-9f4d-a1869af6a23f","Type":"ContainerStarted","Data":"13073fb5597aabcc0be45afe7891278491e8fe551edd5d8f36d37628cbf3ad78"} Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.672925 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8458cc5fd6-z5j6z"] Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.740357 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:06:28 crc kubenswrapper[4902]: I0121 16:06:28.819826 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.296569 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:29 crc kubenswrapper[4902]: E0121 16:06:29.297071 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 
21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.602687 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerID="78e0f5562314520f841b7ea0877c38f4f434c16d4495c8580ea2c10f6698660a" exitCode=0 Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.603485 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerDied","Data":"78e0f5562314520f841b7ea0877c38f4f434c16d4495c8580ea2c10f6698660a"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.605259 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerStarted","Data":"85869cd817e0c1d272d916eb419b76f6201c88f1c3e64ad5a43adcae83c81773"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.630656 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c94b5b747-nxfg6" event={"ID":"9162d3ad-8f1a-4998-9f4d-a1869af6a23f","Type":"ContainerStarted","Data":"ea38d7edf84c05ec880f1c91f064fad70b15df79455e9e00e8194282fabb1f64"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.630706 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c94b5b747-nxfg6" event={"ID":"9162d3ad-8f1a-4998-9f4d-a1869af6a23f","Type":"ContainerStarted","Data":"34e5ff8f6124c4d89f1e5fd95c1c06e57d5eed2cbe37d03c28fb671f72ccaa1a"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.634353 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerStarted","Data":"fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.634401 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerStarted","Data":"3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.634418 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerStarted","Data":"ab7459deb556f801c8bbce99eae2c2e300be05d5f0dd8719e129ec50c380cba7"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.634936 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.635073 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.646799 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-c94b5b747-nxfg6" podStartSLOduration=2.646781995 podStartE2EDuration="2.646781995s" podCreationTimestamp="2026-01-21 16:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:29.644918542 +0000 UTC m=+5551.721751571" watchObservedRunningTime="2026-01-21 16:06:29.646781995 +0000 UTC m=+5551.723615024" Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.649277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" event={"ID":"95cef3f6-598c-483e-b2b6-bb3d2942f18e","Type":"ContainerStarted","Data":"b49919a24663d6af6440fdfa1aede199fcb6440512ce811d7c9f7bfa99213e0d"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.649327 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" event={"ID":"95cef3f6-598c-483e-b2b6-bb3d2942f18e","Type":"ContainerStarted","Data":"fb2b0fdd1c51ff429d9fa5f4cf442c6b1046ac107d956a8562e2a03d47b8bf76"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.649345 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" event={"ID":"95cef3f6-598c-483e-b2b6-bb3d2942f18e","Type":"ContainerStarted","Data":"4af23d677cf7b1dfe1cac1e02051031118fab8aa51d79a4cebb67955df545551"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.668145 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-659b467f5b-b29gg" podStartSLOduration=2.668126465 podStartE2EDuration="2.668126465s" podCreationTimestamp="2026-01-21 16:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:29.667112966 +0000 UTC m=+5551.743945995" watchObservedRunningTime="2026-01-21 16:06:29.668126465 +0000 UTC m=+5551.744959504" Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.676547 4902 generic.go:334] "Generic (PLEG): container finished" podID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerID="b573b028c6503d871975bc29f32aef22c05a1b0aa15962dbb2b5064c028f1d54" exitCode=0 Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.676598 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerDied","Data":"b573b028c6503d871975bc29f32aef22c05a1b0aa15962dbb2b5064c028f1d54"} Jan 21 16:06:29 crc kubenswrapper[4902]: I0121 16:06:29.773137 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8458cc5fd6-z5j6z" podStartSLOduration=2.773111215 podStartE2EDuration="2.773111215s" podCreationTimestamp="2026-01-21 16:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:29.70816856 +0000 UTC m=+5551.785001589" watchObservedRunningTime="2026-01-21 16:06:29.773111215 +0000 UTC m=+5551.849944244" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.707019 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerStarted","Data":"d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150"} Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.715261 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerStarted","Data":"5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d"} Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.718528 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85645f8dd4-bf5z5"] Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.720210 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.722346 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.722381 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.734346 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85645f8dd4-bf5z5"] Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.737992 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lqndd" podStartSLOduration=3.18290342 podStartE2EDuration="5.737973255s" podCreationTimestamp="2026-01-21 16:06:25 +0000 UTC" firstStartedPulling="2026-01-21 16:06:27.560510086 +0000 UTC m=+5549.637343115" lastFinishedPulling="2026-01-21 16:06:30.115579921 +0000 UTC m=+5552.192412950" observedRunningTime="2026-01-21 16:06:30.736500164 +0000 UTC m=+5552.813333203" watchObservedRunningTime="2026-01-21 16:06:30.737973255 +0000 UTC m=+5552.814806284" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.784353 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" podStartSLOduration=3.7843367690000003 podStartE2EDuration="3.784336769s" podCreationTimestamp="2026-01-21 16:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:30.783654209 +0000 UTC m=+5552.860487238" watchObservedRunningTime="2026-01-21 16:06:30.784336769 +0000 UTC m=+5552.861169798" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.850814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2frk\" (UniqueName: \"kubernetes.io/projected/49dfaf72-0f35-4705-a9d8-830878fc46d1-kube-api-access-m2frk\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851033 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-combined-ca-bundle\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851147 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-internal-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dfaf72-0f35-4705-a9d8-830878fc46d1-logs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851267 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-public-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851363 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data-custom\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.851395 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954171 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-combined-ca-bundle\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954212 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-internal-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954243 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dfaf72-0f35-4705-a9d8-830878fc46d1-logs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954261 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-public-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954295 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data-custom\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954311 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954386 4902 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2frk\" (UniqueName: \"kubernetes.io/projected/49dfaf72-0f35-4705-a9d8-830878fc46d1-kube-api-access-m2frk\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.954790 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dfaf72-0f35-4705-a9d8-830878fc46d1-logs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.961144 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-public-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.961698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data-custom\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.962703 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-internal-tls-certs\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.975479 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-combined-ca-bundle\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.976411 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dfaf72-0f35-4705-a9d8-830878fc46d1-config-data\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:30 crc kubenswrapper[4902]: I0121 16:06:30.983081 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2frk\" (UniqueName: \"kubernetes.io/projected/49dfaf72-0f35-4705-a9d8-830878fc46d1-kube-api-access-m2frk\") pod \"barbican-api-85645f8dd4-bf5z5\" (UID: \"49dfaf72-0f35-4705-a9d8-830878fc46d1\") " pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.044533 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.471720 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85645f8dd4-bf5z5"] Jan 21 16:06:31 crc kubenswrapper[4902]: W0121 16:06:31.474874 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49dfaf72_0f35_4705_a9d8_830878fc46d1.slice/crio-ce538407c072e16f9728c02b46405dc894ea7f51826ce506d68b3e7971f3db4a WatchSource:0}: Error finding container ce538407c072e16f9728c02b46405dc894ea7f51826ce506d68b3e7971f3db4a: Status 404 returned error can't find the container with id ce538407c072e16f9728c02b46405dc894ea7f51826ce506d68b3e7971f3db4a Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.723818 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85645f8dd4-bf5z5" event={"ID":"49dfaf72-0f35-4705-a9d8-830878fc46d1","Type":"ContainerStarted","Data":"2a39058449ea7ade45f8902dd1b1ab04a92974d6273b9cf8224f9eb4f50e0ebd"} Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.723850 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85645f8dd4-bf5z5" event={"ID":"49dfaf72-0f35-4705-a9d8-830878fc46d1","Type":"ContainerStarted","Data":"ce538407c072e16f9728c02b46405dc894ea7f51826ce506d68b3e7971f3db4a"} Jan 21 16:06:31 crc kubenswrapper[4902]: I0121 16:06:31.724256 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:32 crc kubenswrapper[4902]: I0121 16:06:32.738570 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85645f8dd4-bf5z5" event={"ID":"49dfaf72-0f35-4705-a9d8-830878fc46d1","Type":"ContainerStarted","Data":"2c9d9f1101a8f35b09635f69444a56d9252c27db830d4f2642f1eb2abea8a024"} Jan 21 16:06:32 crc kubenswrapper[4902]: I0121 16:06:32.782298 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85645f8dd4-bf5z5" podStartSLOduration=2.782279955 podStartE2EDuration="2.782279955s" podCreationTimestamp="2026-01-21 16:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:32.77676867 +0000 UTC m=+5554.853601709" watchObservedRunningTime="2026-01-21 16:06:32.782279955 +0000 UTC m=+5554.859112984" Jan 21 16:06:33 crc kubenswrapper[4902]: I0121 16:06:33.745067 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:33 crc kubenswrapper[4902]: I0121 16:06:33.745147 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:35 crc kubenswrapper[4902]: I0121 16:06:35.196773 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:35 crc kubenswrapper[4902]: I0121 16:06:35.857993 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:35 crc kubenswrapper[4902]: I0121 16:06:35.858358 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:36 crc kubenswrapper[4902]: I0121 16:06:36.609487 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:36 crc kubenswrapper[4902]: I0121 16:06:36.943914 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lqndd" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" probeResult="failure" output=< Jan 21 16:06:36 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 16:06:36 crc kubenswrapper[4902]: > Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.495826 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.537052 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85645f8dd4-bf5z5" Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.604592 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.604850 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" containerID="cri-o://3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6" gracePeriod=30 Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.604994 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" containerID="cri-o://fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4" gracePeriod=30 Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.613527 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": EOF" Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.621781 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": EOF" Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.794719 4902 generic.go:334] "Generic (PLEG): container finished" podID="79ad32fd-7d7a-4779-87c5-093c16782962" containerID="3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6" exitCode=143 Jan 21 16:06:37 crc kubenswrapper[4902]: I0121 16:06:37.795001 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerDied","Data":"3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6"} Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.168219 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.261992 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.262304 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="dnsmasq-dns" 
containerID="cri-o://d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0" gracePeriod=10 Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.830659 4902 generic.go:334] "Generic (PLEG): container finished" podID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerID="d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0" exitCode=0 Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.830924 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerDied","Data":"d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0"} Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.830949 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" event={"ID":"f2ab6913-fdd0-4944-8c16-c213aecdd825","Type":"ContainerDied","Data":"f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe"} Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.830961 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f65c006b6d1833eb0c682b4e268cdf5aca7299197e459f4d36236b4f3229b9fe" Jan 21 16:06:38 crc kubenswrapper[4902]: I0121 16:06:38.895116 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021679 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021743 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021811 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021891 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.021994 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") pod \"f2ab6913-fdd0-4944-8c16-c213aecdd825\" (UID: \"f2ab6913-fdd0-4944-8c16-c213aecdd825\") " Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.027885 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx" (OuterVolumeSpecName: "kube-api-access-297sx") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "kube-api-access-297sx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.070938 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.074919 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.092901 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config" (OuterVolumeSpecName: "config") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.105948 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2ab6913-fdd0-4944-8c16-c213aecdd825" (UID: "f2ab6913-fdd0-4944-8c16-c213aecdd825"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124234 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124260 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297sx\" (UniqueName: \"kubernetes.io/projected/f2ab6913-fdd0-4944-8c16-c213aecdd825-kube-api-access-297sx\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124269 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124279 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.124287 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ab6913-fdd0-4944-8c16-c213aecdd825-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.840395 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b4c895fc-lcmhk" Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.894199 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:06:39 crc kubenswrapper[4902]: I0121 16:06:39.896218 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b4c895fc-lcmhk"] Jan 21 16:06:40 crc kubenswrapper[4902]: I0121 16:06:40.305566 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" path="/var/lib/kubelet/pods/f2ab6913-fdd0-4944-8c16-c213aecdd825/volumes" Jan 21 16:06:41 crc kubenswrapper[4902]: I0121 16:06:41.294847 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:41 crc kubenswrapper[4902]: E0121 16:06:41.295229 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.001593 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": read tcp 10.217.0.2:50934->10.217.1.30:9311: read: connection reset by peer" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.001601 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": read tcp 10.217.0.2:50942->10.217.1.30:9311: read: connection reset by peer" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.254119 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": dial tcp 10.217.1.30:9311: connect: connection refused" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.254246 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659b467f5b-b29gg" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": dial tcp 10.217.1.30:9311: connect: connection refused" Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.891016 4902 generic.go:334] "Generic (PLEG): container finished" podID="79ad32fd-7d7a-4779-87c5-093c16782962" containerID="fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4" exitCode=0 Jan 21 16:06:43 crc kubenswrapper[4902]: I0121 16:06:43.891091 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerDied","Data":"fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4"} Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.112658 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.125913 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126032 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126151 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126227 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126276 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") pod \"79ad32fd-7d7a-4779-87c5-093c16782962\" (UID: \"79ad32fd-7d7a-4779-87c5-093c16782962\") " Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.126924 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs" (OuterVolumeSpecName: "logs") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.133181 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.137481 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv" (OuterVolumeSpecName: "kube-api-access-n6gdv") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "kube-api-access-n6gdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.193192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227526 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227552 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227561 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79ad32fd-7d7a-4779-87c5-093c16782962-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227569 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6gdv\" (UniqueName: \"kubernetes.io/projected/79ad32fd-7d7a-4779-87c5-093c16782962-kube-api-access-n6gdv\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.227637 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data" (OuterVolumeSpecName: "config-data") pod "79ad32fd-7d7a-4779-87c5-093c16782962" (UID: "79ad32fd-7d7a-4779-87c5-093c16782962"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.329758 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79ad32fd-7d7a-4779-87c5-093c16782962-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.901750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659b467f5b-b29gg" event={"ID":"79ad32fd-7d7a-4779-87c5-093c16782962","Type":"ContainerDied","Data":"ab7459deb556f801c8bbce99eae2c2e300be05d5f0dd8719e129ec50c380cba7"} Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.901792 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-659b467f5b-b29gg" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.901817 4902 scope.go:117] "RemoveContainer" containerID="fd2638be10a4932da8d6b26c06b5ad301fa3bce23378df3af60cb4cb40724ee4" Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.924522 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.935428 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-659b467f5b-b29gg"] Jan 21 16:06:44 crc kubenswrapper[4902]: I0121 16:06:44.937383 4902 scope.go:117] "RemoveContainer" containerID="3aa4ff5500e0f1a699c62a0b183168e55443f1a98525a6c911857d407fabc6d6" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.615988 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:06:45 crc kubenswrapper[4902]: E0121 16:06:45.616325 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="dnsmasq-dns" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616337 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="dnsmasq-dns" Jan 21 16:06:45 crc kubenswrapper[4902]: E0121 16:06:45.616352 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616358 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" Jan 21 16:06:45 crc kubenswrapper[4902]: E0121 16:06:45.616377 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616383 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" Jan 21 16:06:45 crc kubenswrapper[4902]: E0121 16:06:45.616400 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="init" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616405 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="init" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616537 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ab6913-fdd0-4944-8c16-c213aecdd825" containerName="dnsmasq-dns" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616551 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.616566 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" containerName="barbican-api-log" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.617200 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.626409 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.650244 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.650608 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.716591 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.717586 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.719157 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.743838 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.751643 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.751691 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.751764 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.751798 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.752771 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.804873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") pod \"neutron-db-create-w8j46\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.852788 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.852927 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.853592 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.874680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") pod \"neutron-cb7a-account-create-update-qqdxl\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.907962 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.934100 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:45 crc kubenswrapper[4902]: I0121 16:06:45.959488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.051163 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.142446 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.306429 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ad32fd-7d7a-4779-87c5-093c16782962" path="/var/lib/kubelet/pods/79ad32fd-7d7a-4779-87c5-093c16782962/volumes" Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.446733 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.544141 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:06:46 crc kubenswrapper[4902]: W0121 16:06:46.546663 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ecff7c_0bbc_47c7_82b4_fbdce132c94b.slice/crio-cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc WatchSource:0}: Error finding container cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc: Status 404 returned error can't find the container with id cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.918404 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8j46" event={"ID":"b91136e9-5bad-4d5c-8eff-8a77985a1726","Type":"ContainerStarted","Data":"efad9d3030aa3752d324b9640e74fe010cdfafc51d4ab887dfdd4055c1f6fa5a"} Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.918722 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8j46" event={"ID":"b91136e9-5bad-4d5c-8eff-8a77985a1726","Type":"ContainerStarted","Data":"ed2497b8bd2c814230d43e74849565f16f3e8ac55df9e5dbe20de8f27938bd87"} Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.920249 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb7a-account-create-update-qqdxl" event={"ID":"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b","Type":"ContainerStarted","Data":"371e1a26e1a76ba398e48c1e98072317dd29a6c8abf9e8ab60b15d658481161c"} Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.920294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb7a-account-create-update-qqdxl" event={"ID":"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b","Type":"ContainerStarted","Data":"cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc"} Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.938189 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-w8j46" podStartSLOduration=1.9381693279999999 podStartE2EDuration="1.938169328s" podCreationTimestamp="2026-01-21 16:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:46.931072928 +0000 UTC m=+5569.007905947" watchObservedRunningTime="2026-01-21 16:06:46.938169328 +0000 UTC m=+5569.015002357" Jan 21 16:06:46 crc kubenswrapper[4902]: I0121 16:06:46.947865 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cb7a-account-create-update-qqdxl" podStartSLOduration=1.94784998 podStartE2EDuration="1.94784998s" podCreationTimestamp="2026-01-21 16:06:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:46.946633536 +0000 UTC m=+5569.023466585" watchObservedRunningTime="2026-01-21 16:06:46.94784998 +0000 UTC m=+5569.024683009" Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.930456 4902 generic.go:334] "Generic (PLEG): container finished" podID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" containerID="371e1a26e1a76ba398e48c1e98072317dd29a6c8abf9e8ab60b15d658481161c" exitCode=0 Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.930516 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb7a-account-create-update-qqdxl" event={"ID":"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b","Type":"ContainerDied","Data":"371e1a26e1a76ba398e48c1e98072317dd29a6c8abf9e8ab60b15d658481161c"} Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.932967 4902 generic.go:334] "Generic (PLEG): container finished" podID="b91136e9-5bad-4d5c-8eff-8a77985a1726" containerID="efad9d3030aa3752d324b9640e74fe010cdfafc51d4ab887dfdd4055c1f6fa5a" exitCode=0 Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.933590 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lqndd" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" containerID="cri-o://d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150" gracePeriod=2 Jan 21 16:06:47 crc kubenswrapper[4902]: I0121 16:06:47.933020 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8j46" event={"ID":"b91136e9-5bad-4d5c-8eff-8a77985a1726","Type":"ContainerDied","Data":"efad9d3030aa3752d324b9640e74fe010cdfafc51d4ab887dfdd4055c1f6fa5a"} Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.070251 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qc8ct"] Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.076647 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qc8ct"] Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.313693 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d642b708-8313-4edd-8183-4dcd679721b6" path="/var/lib/kubelet/pods/d642b708-8313-4edd-8183-4dcd679721b6/volumes" Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.942592 4902 generic.go:334] "Generic (PLEG): container finished" podID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerID="d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150" exitCode=0 Jan 21 16:06:48 crc kubenswrapper[4902]: I0121 16:06:48.943008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerDied","Data":"d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150"} Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.046301 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.139909 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") pod \"ddd543be-03fc-4a61-bb0b-55a066361a5f\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.140189 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") pod \"ddd543be-03fc-4a61-bb0b-55a066361a5f\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.140783 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") pod \"ddd543be-03fc-4a61-bb0b-55a066361a5f\" (UID: \"ddd543be-03fc-4a61-bb0b-55a066361a5f\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.149264 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities" (OuterVolumeSpecName: "utilities") pod "ddd543be-03fc-4a61-bb0b-55a066361a5f" (UID: "ddd543be-03fc-4a61-bb0b-55a066361a5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.150912 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q" (OuterVolumeSpecName: "kube-api-access-gqg6q") pod "ddd543be-03fc-4a61-bb0b-55a066361a5f" (UID: "ddd543be-03fc-4a61-bb0b-55a066361a5f"). InnerVolumeSpecName "kube-api-access-gqg6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.243525 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqg6q\" (UniqueName: \"kubernetes.io/projected/ddd543be-03fc-4a61-bb0b-55a066361a5f-kube-api-access-gqg6q\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.243569 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.271262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddd543be-03fc-4a61-bb0b-55a066361a5f" (UID: "ddd543be-03fc-4a61-bb0b-55a066361a5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.347588 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd543be-03fc-4a61-bb0b-55a066361a5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.361829 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.386817 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.448741 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") pod \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.448835 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") pod \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\" (UID: \"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.449666 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" (UID: "d3ecff7c-0bbc-47c7-82b4-fbdce132c94b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.455841 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs" (OuterVolumeSpecName: "kube-api-access-tqbhs") pod "d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" (UID: "d3ecff7c-0bbc-47c7-82b4-fbdce132c94b"). InnerVolumeSpecName "kube-api-access-tqbhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.550902 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") pod \"b91136e9-5bad-4d5c-8eff-8a77985a1726\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.551314 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") pod \"b91136e9-5bad-4d5c-8eff-8a77985a1726\" (UID: \"b91136e9-5bad-4d5c-8eff-8a77985a1726\") " Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.551654 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqbhs\" (UniqueName: \"kubernetes.io/projected/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-kube-api-access-tqbhs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.551670 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.551900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b91136e9-5bad-4d5c-8eff-8a77985a1726" (UID: "b91136e9-5bad-4d5c-8eff-8a77985a1726"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.557237 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq" (OuterVolumeSpecName: "kube-api-access-g5fbq") pod "b91136e9-5bad-4d5c-8eff-8a77985a1726" (UID: "b91136e9-5bad-4d5c-8eff-8a77985a1726"). InnerVolumeSpecName "kube-api-access-g5fbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.654707 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b91136e9-5bad-4d5c-8eff-8a77985a1726-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.654744 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5fbq\" (UniqueName: \"kubernetes.io/projected/b91136e9-5bad-4d5c-8eff-8a77985a1726-kube-api-access-g5fbq\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.953522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8j46" event={"ID":"b91136e9-5bad-4d5c-8eff-8a77985a1726","Type":"ContainerDied","Data":"ed2497b8bd2c814230d43e74849565f16f3e8ac55df9e5dbe20de8f27938bd87"} Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.953563 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2497b8bd2c814230d43e74849565f16f3e8ac55df9e5dbe20de8f27938bd87" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.953631 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8j46" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.956440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lqndd" event={"ID":"ddd543be-03fc-4a61-bb0b-55a066361a5f","Type":"ContainerDied","Data":"ff9be32e5e25b980d1ae27fea132e376859f3819b5b41cfaa777f65b307f07ce"} Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.956477 4902 scope.go:117] "RemoveContainer" containerID="d5c168b8bfc82e8b469571ae78e3766b5afadc9dedc2c3dfaca0d6e58daff150" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.956605 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lqndd" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.972072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb7a-account-create-update-qqdxl" event={"ID":"d3ecff7c-0bbc-47c7-82b4-fbdce132c94b","Type":"ContainerDied","Data":"cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc"} Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.972104 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc9b3facb086736d2fafbcc6a490312b82234f09a075436f35f7597a70f964dc" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.972129 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb7a-account-create-update-qqdxl" Jan 21 16:06:49 crc kubenswrapper[4902]: I0121 16:06:49.996242 4902 scope.go:117] "RemoveContainer" containerID="b573b028c6503d871975bc29f32aef22c05a1b0aa15962dbb2b5064c028f1d54" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.008240 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.017780 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lqndd"] Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.018261 4902 scope.go:117] "RemoveContainer" containerID="c1974a18bb600f84ad592fcb1ec0fc601ac073e08d0d02562db2c3da418aff99" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.311656 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" path="/var/lib/kubelet/pods/ddd543be-03fc-4a61-bb0b-55a066361a5f/volumes" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.936909 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938025 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938083 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938112 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="extract-utilities" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938124 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="extract-utilities" Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938168 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="extract-content" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938180 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="extract-content" Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938194 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91136e9-5bad-4d5c-8eff-8a77985a1726" containerName="mariadb-database-create" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938205 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91136e9-5bad-4d5c-8eff-8a77985a1726" containerName="mariadb-database-create" Jan 21 16:06:50 crc kubenswrapper[4902]: E0121 16:06:50.938265 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" containerName="mariadb-account-create-update" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938277 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" containerName="mariadb-account-create-update" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938612 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91136e9-5bad-4d5c-8eff-8a77985a1726" containerName="mariadb-database-create" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938676 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" containerName="mariadb-account-create-update" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.938711 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd543be-03fc-4a61-bb0b-55a066361a5f" containerName="registry-server" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.939958 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.942505 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.942590 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.942519 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kpzjm" Jan 21 16:06:50 crc kubenswrapper[4902]: I0121 16:06:50.949941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.083122 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.083258 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.083298 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.184630 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.184738 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.184778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.189654 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.195737 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.201566 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") pod \"neutron-db-sync-fj4nd\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.265410 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.704147 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:06:51 crc kubenswrapper[4902]: W0121 16:06:51.707288 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97a9d8bd_92b5_42ef_b945_6b3ccc65b48b.slice/crio-10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131 WatchSource:0}: Error finding container 10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131: Status 404 returned error can't find the container with id 10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131 Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.995738 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fj4nd" event={"ID":"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b","Type":"ContainerStarted","Data":"f86be6b9f95f42ed7575c1db6ca5d50d96cc6520921a01dd5dff53f1cdbb4ae8"} Jan 21 16:06:51 crc kubenswrapper[4902]: I0121 16:06:51.996115 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fj4nd" event={"ID":"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b","Type":"ContainerStarted","Data":"10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131"} Jan 21 16:06:52 crc kubenswrapper[4902]: I0121 16:06:52.013718 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fj4nd" podStartSLOduration=2.013700508 podStartE2EDuration="2.013700508s" podCreationTimestamp="2026-01-21 16:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:52.011171427 +0000 UTC m=+5574.088004456" watchObservedRunningTime="2026-01-21 16:06:52.013700508 +0000 UTC m=+5574.090533537" Jan 21 16:06:53 crc kubenswrapper[4902]: I0121 16:06:53.296616 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:06:53 crc kubenswrapper[4902]: E0121 16:06:53.296920 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:06:56 crc kubenswrapper[4902]: I0121 16:06:56.031479 4902 generic.go:334] "Generic (PLEG): container finished" podID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" containerID="f86be6b9f95f42ed7575c1db6ca5d50d96cc6520921a01dd5dff53f1cdbb4ae8" exitCode=0 Jan 21 16:06:56 crc kubenswrapper[4902]: I0121 16:06:56.031552 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fj4nd" event={"ID":"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b","Type":"ContainerDied","Data":"f86be6b9f95f42ed7575c1db6ca5d50d96cc6520921a01dd5dff53f1cdbb4ae8"} Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.485670 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.601403 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") pod \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.601646 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") pod \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.601700 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") pod \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\" (UID: \"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b\") " Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.607285 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx" (OuterVolumeSpecName: "kube-api-access-rz5qx") pod "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" (UID: "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b"). InnerVolumeSpecName "kube-api-access-rz5qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.626059 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" (UID: "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.626900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config" (OuterVolumeSpecName: "config") pod "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" (UID: "97a9d8bd-92b5-42ef-b945-6b3ccc65b48b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.704022 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz5qx\" (UniqueName: \"kubernetes.io/projected/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-kube-api-access-rz5qx\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.704086 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:57 crc kubenswrapper[4902]: I0121 16:06:57.704108 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.057895 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fj4nd" event={"ID":"97a9d8bd-92b5-42ef-b945-6b3ccc65b48b","Type":"ContainerDied","Data":"10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131"} Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.058234 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10973f4d131c614f6a2244337364a4826c042c9f800704193f82c6fb2420e131" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.057944 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fj4nd" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.323931 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:06:58 crc kubenswrapper[4902]: E0121 16:06:58.324325 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" containerName="neutron-db-sync" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.324339 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" containerName="neutron-db-sync" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.324477 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" containerName="neutron-db-sync" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.325310 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.381373 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.401946 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.404486 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.407027 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kpzjm" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.407128 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.407422 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.410429 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419634 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419733 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp2sl\" (UniqueName: \"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419778 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419806 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.419834 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.437106 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522556 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522625 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp2sl\" (UniqueName: 
\"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522770 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522901 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522960 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.522991 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523173 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523192 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523244 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523841 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.523839 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.524145 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.524524 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.545424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp2sl\" (UniqueName: \"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") pod \"dnsmasq-dns-578fc9f6df-sv7cs\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625237 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625332 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625370 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625398 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.625436 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: 
\"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.629889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.629969 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.631440 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.631728 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.654929 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") pod \"neutron-5cd8bf9fdd-mdn4r\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.664065 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:06:58 crc kubenswrapper[4902]: I0121 16:06:58.738130 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:06:59 crc kubenswrapper[4902]: I0121 16:06:59.185235 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:06:59 crc kubenswrapper[4902]: W0121 16:06:59.186979 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2187cb72_8703_4c5a_b8ae_b08461a35e1b.slice/crio-4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36 WatchSource:0}: Error finding container 4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36: Status 404 returned error can't find the container with id 4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36 Jan 21 16:06:59 crc kubenswrapper[4902]: I0121 16:06:59.517215 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:06:59 crc kubenswrapper[4902]: W0121 16:06:59.522601 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccca0d57_e560_4b6a_9e68_930df6654ae6.slice/crio-1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919 WatchSource:0}: Error finding container 1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919: Status 404 returned error can't find the container with id 1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919 Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.085544 4902 generic.go:334] "Generic (PLEG): container finished" podID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerID="d20e1e16697cfc6d2b0773a52a542bdb6a438acc78c99ef60039303f8affa50a" exitCode=0 Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.087285 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerDied","Data":"d20e1e16697cfc6d2b0773a52a542bdb6a438acc78c99ef60039303f8affa50a"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.087360 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerStarted","Data":"4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.089816 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerStarted","Data":"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.089858 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerStarted","Data":"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.089871 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerStarted","Data":"1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919"} Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.089981 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.130429 4902 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cd8bf9fdd-mdn4r" podStartSLOduration=2.1303866559999998 podStartE2EDuration="2.130386656s" podCreationTimestamp="2026-01-21 16:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:00.125245122 +0000 UTC m=+5582.202078171" watchObservedRunningTime="2026-01-21 16:07:00.130386656 +0000 UTC m=+5582.207219706" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.343143 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66b9c9869c-btkxh"] Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.359230 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.362163 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.362326 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.363932 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66b9c9869c-btkxh"] Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-internal-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466262 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-ovndb-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466291 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-combined-ca-bundle\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466321 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b5l4\" (UniqueName: \"kubernetes.io/projected/565a7068-4930-41e5-99bb-a08376495b63-kube-api-access-5b5l4\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466363 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-httpd-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.466431 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-public-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.569611 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.570820 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-public-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.570954 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-internal-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.570986 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-ovndb-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.571014 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-combined-ca-bundle\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.571217 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b5l4\" (UniqueName: \"kubernetes.io/projected/565a7068-4930-41e5-99bb-a08376495b63-kube-api-access-5b5l4\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.571280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-httpd-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.575701 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: 
\"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.576532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-combined-ca-bundle\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.577199 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-internal-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.577314 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-httpd-config\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.578828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-ovndb-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.580649 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/565a7068-4930-41e5-99bb-a08376495b63-public-tls-certs\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.600204 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b5l4\" (UniqueName: \"kubernetes.io/projected/565a7068-4930-41e5-99bb-a08376495b63-kube-api-access-5b5l4\") pod \"neutron-66b9c9869c-btkxh\" (UID: \"565a7068-4930-41e5-99bb-a08376495b63\") " pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:00 crc kubenswrapper[4902]: I0121 16:07:00.679920 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:01 crc kubenswrapper[4902]: I0121 16:07:01.101931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerStarted","Data":"61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda"} Jan 21 16:07:01 crc kubenswrapper[4902]: I0121 16:07:01.128104 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" podStartSLOduration=3.128086658 podStartE2EDuration="3.128086658s" podCreationTimestamp="2026-01-21 16:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:01.120443774 +0000 UTC m=+5583.197276793" watchObservedRunningTime="2026-01-21 16:07:01.128086658 +0000 UTC m=+5583.204919687" Jan 21 16:07:01 crc kubenswrapper[4902]: I0121 16:07:01.203803 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66b9c9869c-btkxh"] Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.111687 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c9869c-btkxh" event={"ID":"565a7068-4930-41e5-99bb-a08376495b63","Type":"ContainerStarted","Data":"0287633e1aa6dad7b8e21acde3d76b800797e287231d5c35f09d1ec2c866d8ff"} Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.111997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.112016 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c9869c-btkxh" event={"ID":"565a7068-4930-41e5-99bb-a08376495b63","Type":"ContainerStarted","Data":"4a840a4d15d6dedf573309427149cc5c58b3a84758a393d7b40ddaeec8719850"} Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.112026 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66b9c9869c-btkxh" event={"ID":"565a7068-4930-41e5-99bb-a08376495b63","Type":"ContainerStarted","Data":"23025e690c1cf9ea2e18a5a67272598f4f17d1cb6a734ff1518f017078dae43e"} Jan 21 16:07:02 crc kubenswrapper[4902]: I0121 16:07:02.134404 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66b9c9869c-btkxh" podStartSLOduration=2.134208498 podStartE2EDuration="2.134208498s" podCreationTimestamp="2026-01-21 16:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:02.130123773 +0000 UTC m=+5584.206956802" watchObservedRunningTime="2026-01-21 16:07:02.134208498 +0000 UTC m=+5584.211041527" Jan 21 16:07:03 crc kubenswrapper[4902]: I0121 16:07:03.120565 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:08 crc kubenswrapper[4902]: I0121 16:07:08.301490 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:07:08 crc kubenswrapper[4902]: E0121 16:07:08.303209 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:07:08 crc kubenswrapper[4902]: I0121 16:07:08.666293 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:07:08 crc kubenswrapper[4902]: I0121 16:07:08.748070 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:07:08 crc kubenswrapper[4902]: I0121 16:07:08.749016 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="dnsmasq-dns" containerID="cri-o://5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d" gracePeriod=10 Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.181801 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerID="5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d" exitCode=0 Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.181883 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerDied","Data":"5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d"} Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.303217 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.365555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.365952 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.385247 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.385359 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.385382 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") pod \"ff4fadc7-2c31-451f-9455-5112a195b36e\" (UID: \"ff4fadc7-2c31-451f-9455-5112a195b36e\") " Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.390300 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l" (OuterVolumeSpecName: "kube-api-access-ww75l") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "kube-api-access-ww75l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.417857 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.431352 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.452146 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config" (OuterVolumeSpecName: "config") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.454910 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff4fadc7-2c31-451f-9455-5112a195b36e" (UID: "ff4fadc7-2c31-451f-9455-5112a195b36e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493716 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493751 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww75l\" (UniqueName: \"kubernetes.io/projected/ff4fadc7-2c31-451f-9455-5112a195b36e-kube-api-access-ww75l\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493764 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493774 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4902]: I0121 16:07:09.493782 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4fadc7-2c31-451f-9455-5112a195b36e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.198554 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" event={"ID":"ff4fadc7-2c31-451f-9455-5112a195b36e","Type":"ContainerDied","Data":"85869cd817e0c1d272d916eb419b76f6201c88f1c3e64ad5a43adcae83c81773"} Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.198614 4902 scope.go:117] "RemoveContainer" containerID="5d178668253a4565a7e272761e78d3c8af2f6d158e8aec8d4e0682f8d430786d" Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.198681 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85d446946c-gb4r2" Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.245438 4902 scope.go:117] "RemoveContainer" containerID="78e0f5562314520f841b7ea0877c38f4f434c16d4495c8580ea2c10f6698660a" Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.265957 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.276223 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85d446946c-gb4r2"] Jan 21 16:07:10 crc kubenswrapper[4902]: I0121 16:07:10.306664 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" path="/var/lib/kubelet/pods/ff4fadc7-2c31-451f-9455-5112a195b36e/volumes" Jan 21 16:07:20 crc kubenswrapper[4902]: I0121 16:07:20.294758 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:07:21 crc kubenswrapper[4902]: I0121 16:07:21.306071 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b"} Jan 21 16:07:24 crc kubenswrapper[4902]: I0121 16:07:24.365994 4902 scope.go:117] "RemoveContainer" containerID="2f8dc76ea47c61aa0225c738e775c625c670e1dc7f5e344791fe2553026ed3d2" Jan 21 16:07:28 crc kubenswrapper[4902]: I0121 16:07:28.756711 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:07:30 crc kubenswrapper[4902]: I0121 16:07:30.692627 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66b9c9869c-btkxh" Jan 21 16:07:30 crc kubenswrapper[4902]: I0121 16:07:30.771182 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:07:30 crc kubenswrapper[4902]: I0121 16:07:30.771387 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cd8bf9fdd-mdn4r" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-api" containerID="cri-o://ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" gracePeriod=30 Jan 21 16:07:30 crc kubenswrapper[4902]: I0121 16:07:30.771503 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cd8bf9fdd-mdn4r" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-httpd" containerID="cri-o://fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" gracePeriod=30 Jan 21 16:07:31 crc kubenswrapper[4902]: I0121 16:07:31.633964 4902 generic.go:334] "Generic (PLEG): container finished" podID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerID="fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" exitCode=0 Jan 21 16:07:31 crc kubenswrapper[4902]: I0121 16:07:31.634300 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerDied","Data":"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904"} Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.315003 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399590 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399638 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399678 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399708 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.399817 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") pod \"ccca0d57-e560-4b6a-9e68-930df6654ae6\" (UID: \"ccca0d57-e560-4b6a-9e68-930df6654ae6\") " Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.405439 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg" (OuterVolumeSpecName: "kube-api-access-kzfrg") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "kube-api-access-kzfrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.405545 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.449241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config" (OuterVolumeSpecName: "config") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.455018 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.467985 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ccca0d57-e560-4b6a-9e68-930df6654ae6" (UID: "ccca0d57-e560-4b6a-9e68-930df6654ae6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501152 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501187 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501199 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501207 4902 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccca0d57-e560-4b6a-9e68-930df6654ae6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.501216 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzfrg\" (UniqueName: \"kubernetes.io/projected/ccca0d57-e560-4b6a-9e68-930df6654ae6-kube-api-access-kzfrg\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.669928 4902 generic.go:334] "Generic (PLEG): container finished" podID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerID="ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" exitCode=0 Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.669966 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerDied","Data":"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04"} Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.669987 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cd8bf9fdd-mdn4r" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.670037 4902 scope.go:117] "RemoveContainer" containerID="fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.670025 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cd8bf9fdd-mdn4r" event={"ID":"ccca0d57-e560-4b6a-9e68-930df6654ae6","Type":"ContainerDied","Data":"1b8bc07a5488c91aa60176a1b153e4219c4f47797c1b031d3b65cdd8eacf2919"} Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.698659 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.704707 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5cd8bf9fdd-mdn4r"] Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.708120 4902 scope.go:117] "RemoveContainer" containerID="ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.732189 4902 scope.go:117] "RemoveContainer" containerID="fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" Jan 21 16:07:35 crc kubenswrapper[4902]: E0121 16:07:35.732709 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904\": container with ID starting with fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904 not found: ID does not exist" containerID="fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.732743 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904"} err="failed to get container status \"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904\": rpc error: code = NotFound desc = could not find container \"fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904\": container with ID starting with fd4ae6363c95394143fc28fa62b93df2ed4fd54f7c04ee0740e3f01069a1b904 not found: ID does not exist" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.732775 4902 scope.go:117] "RemoveContainer" containerID="ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" Jan 21 16:07:35 crc kubenswrapper[4902]: E0121 16:07:35.733410 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04\": container with ID starting with ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04 not found: ID does not exist" containerID="ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04" Jan 21 16:07:35 crc kubenswrapper[4902]: I0121 16:07:35.733430 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04"} err="failed to get container status \"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04\": rpc error: code = NotFound desc = could not find container \"ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04\": container with ID starting with ec4bc0585ef4132d3593c334dc30411551ab0ebdfaf9c2fb219f354b1bdbaf04 not found: ID does not exist" Jan 21 16:07:36 crc 
kubenswrapper[4902]: I0121 16:07:36.307310 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" path="/var/lib/kubelet/pods/ccca0d57-e560-4b6a-9e68-930df6654ae6/volumes" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.009705 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mmsfz"] Jan 21 16:07:40 crc kubenswrapper[4902]: E0121 16:07:40.010545 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-httpd" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010558 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-httpd" Jan 21 16:07:40 crc kubenswrapper[4902]: E0121 16:07:40.010586 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="dnsmasq-dns" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010592 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="dnsmasq-dns" Jan 21 16:07:40 crc kubenswrapper[4902]: E0121 16:07:40.010602 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-api" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010608 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-api" Jan 21 16:07:40 crc kubenswrapper[4902]: E0121 16:07:40.010625 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="init" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010631 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="init" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010809 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4fadc7-2c31-451f-9455-5112a195b36e" containerName="dnsmasq-dns" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010822 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-httpd" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.010838 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccca0d57-e560-4b6a-9e68-930df6654ae6" containerName="neutron-api" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.011389 4902 util.go:30] "No sandbox for pod can be found. 
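
The cpu_manager and memory_manager "RemoveStaleState" entries fire while the swift-ring-rebalance pod is being admitted: before any resources are assigned to the new pod, checkpointed CPU and memory assignments belonging to containers of pods that no longer exist (the just-removed neutron and dnsmasq pods) are dropped, in the same spirit as the NotFound-tolerant container cleanup a few entries earlier. A toy version of that pruning, with made-up data structures and values:

package main

import "fmt"

// Toy stale-state pruning in the spirit of the RemoveStaleState entries above:
// drop any checkpointed assignment whose pod UID is no longer active.
// The assignment values ("cpuset ...") are invented for illustration.
type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, activePods map[string]bool) {
    for k := range assignments {
        if !activePods[k.podUID] {
            fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
            delete(assignments, k)
        }
    }
}

func main() {
    assignments := map[key]string{
        {"ccca0d57-e560-4b6a-9e68-930df6654ae6", "neutron-api"}:          "cpuset 0-3",
        {"ccca0d57-e560-4b6a-9e68-930df6654ae6", "neutron-httpd"}:        "cpuset 0-3",
        {"ff4fadc7-2c31-451f-9455-5112a195b36e", "dnsmasq-dns"}:          "cpuset 0-3",
        {"4000cb23-899c-4f52-8c37-8e1c7108a21d", "swift-ring-rebalance"}: "cpuset 0-3",
    }
    active := map[string]bool{"4000cb23-899c-4f52-8c37-8e1c7108a21d": true}
    removeStaleState(assignments, active)
    fmt.Println("remaining assignments:", len(assignments))
}
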
Need to start a new one" pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.015750 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ccbtr" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.016011 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.016207 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.020397 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.027469 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.028790 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mmsfz"] Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.082957 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083279 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083374 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083493 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083580 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083653 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.083720 4902 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.116105 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.117432 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.154576 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.185949 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.185996 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186015 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186072 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186092 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186112 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186157 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: 
\"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186191 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186244 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186259 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.186290 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.187978 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.188562 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.188828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.198430 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.206412 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.215479 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.249626 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") pod \"swift-ring-rebalance-mmsfz\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288246 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288319 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288362 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.288722 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.289236 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.289324 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.289541 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.291465 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.335222 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") pod \"dnsmasq-dns-7fc6cc5b55-79wth\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.352583 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.447439 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.822578 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mmsfz"] Jan 21 16:07:40 crc kubenswrapper[4902]: I0121 16:07:40.972451 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.741840 4902 generic.go:334] "Generic (PLEG): container finished" podID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerID="f7e84713417d76194209c0593e58d711f067795272f7e92b6fd9b97ab7a3b30b" exitCode=0 Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.742332 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerDied","Data":"f7e84713417d76194209c0593e58d711f067795272f7e92b6fd9b97ab7a3b30b"} Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.744771 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerStarted","Data":"755a6256ef5a838eba2c3bd36413f341c71947696922ab6ab33e27a445898c69"} Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.752697 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mmsfz" event={"ID":"4000cb23-899c-4f52-8c37-8e1c7108a21d","Type":"ContainerStarted","Data":"1109cd9543d9ca0e8eb4144afbd398a9130cce515cb6f9310120753b66d88c44"} Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.752945 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mmsfz" 
event={"ID":"4000cb23-899c-4f52-8c37-8e1c7108a21d","Type":"ContainerStarted","Data":"6e9da8422935b55b08f6368255008dcb58e01be8b6781b8ec0c502e854c13813"} Jan 21 16:07:41 crc kubenswrapper[4902]: I0121 16:07:41.788595 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-mmsfz" podStartSLOduration=2.788576195 podStartE2EDuration="2.788576195s" podCreationTimestamp="2026-01-21 16:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:41.782977948 +0000 UTC m=+5623.859810987" watchObservedRunningTime="2026-01-21 16:07:41.788576195 +0000 UTC m=+5623.865409224" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.188971 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.190699 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.192822 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.202948 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224331 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224466 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224508 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224585 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.224656 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.326748 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.326819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.326883 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.326931 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.327004 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.327106 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.327781 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.327822 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.331904 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.332022 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.332696 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.343588 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") pod \"swift-proxy-84746f8478-mdj2b\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.508174 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.779608 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerStarted","Data":"93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f"} Jan 21 16:07:42 crc kubenswrapper[4902]: I0121 16:07:42.798309 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" podStartSLOduration=2.798286736 podStartE2EDuration="2.798286736s" podCreationTimestamp="2026-01-21 16:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:42.794610513 +0000 UTC m=+5624.871443542" watchObservedRunningTime="2026-01-21 16:07:42.798286736 +0000 UTC m=+5624.875119765" Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.168892 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.787654 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerStarted","Data":"404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5"} Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.788224 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.788274 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerStarted","Data":"43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10"} Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.788298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerStarted","Data":"1bb55214e5dca05bf8741ccdd7566d6ef5e813cbe0984652ffe8f4ee39b2d239"} Jan 21 16:07:43 crc kubenswrapper[4902]: I0121 16:07:43.813936 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-84746f8478-mdj2b" podStartSLOduration=1.813912282 podStartE2EDuration="1.813912282s" podCreationTimestamp="2026-01-21 16:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:43.805026312 +0000 UTC m=+5625.881859361" watchObservedRunningTime="2026-01-21 16:07:43.813912282 +0000 UTC m=+5625.890745311" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.447461 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5866fbc874-ktwnr"] Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.449262 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.456281 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.456486 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.462192 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5866fbc874-ktwnr"] Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.501876 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-public-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.502166 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-run-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504191 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-log-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504262 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-etc-swift\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504387 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-config-data\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: 
\"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504402 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzdfn\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-kube-api-access-dzdfn\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504464 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-combined-ca-bundle\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.504525 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-internal-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606633 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-internal-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606694 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-public-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606726 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-run-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-log-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606778 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-etc-swift\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606856 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-config-data\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: 
\"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606874 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzdfn\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-kube-api-access-dzdfn\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.606921 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-combined-ca-bundle\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.607486 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-run-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.607945 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d3194a4-20d2-47cf-8d32-37a8afa5738d-log-httpd\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.619177 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-combined-ca-bundle\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.619675 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-etc-swift\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.632104 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-public-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.632431 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-config-data\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.636708 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d3194a4-20d2-47cf-8d32-37a8afa5738d-internal-tls-certs\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc 
kubenswrapper[4902]: I0121 16:07:44.646169 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzdfn\" (UniqueName: \"kubernetes.io/projected/4d3194a4-20d2-47cf-8d32-37a8afa5738d-kube-api-access-dzdfn\") pod \"swift-proxy-5866fbc874-ktwnr\" (UID: \"4d3194a4-20d2-47cf-8d32-37a8afa5738d\") " pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.801511 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.814918 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:44 crc kubenswrapper[4902]: I0121 16:07:44.815852 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.469354 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5866fbc874-ktwnr"] Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.822365 4902 generic.go:334] "Generic (PLEG): container finished" podID="4000cb23-899c-4f52-8c37-8e1c7108a21d" containerID="1109cd9543d9ca0e8eb4144afbd398a9130cce515cb6f9310120753b66d88c44" exitCode=0 Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.822436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mmsfz" event={"ID":"4000cb23-899c-4f52-8c37-8e1c7108a21d","Type":"ContainerDied","Data":"1109cd9543d9ca0e8eb4144afbd398a9130cce515cb6f9310120753b66d88c44"} Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.824572 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5866fbc874-ktwnr" event={"ID":"4d3194a4-20d2-47cf-8d32-37a8afa5738d","Type":"ContainerStarted","Data":"9149bfcc7534562e0aae4c781ffc79d16af585f90262aa8c324e59ea87159aec"} Jan 21 16:07:45 crc kubenswrapper[4902]: I0121 16:07:45.824635 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5866fbc874-ktwnr" event={"ID":"4d3194a4-20d2-47cf-8d32-37a8afa5738d","Type":"ContainerStarted","Data":"2be58878fc3439679995a3fabdf3d9ab8ce6569b5c9b04dd00889b253bac2ece"} Jan 21 16:07:46 crc kubenswrapper[4902]: I0121 16:07:46.837645 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5866fbc874-ktwnr" event={"ID":"4d3194a4-20d2-47cf-8d32-37a8afa5738d","Type":"ContainerStarted","Data":"4e9287e6759385ce2f52852e70535e9ba99af81abf6af1572e4a2c22448cad16"} Jan 21 16:07:46 crc kubenswrapper[4902]: I0121 16:07:46.837997 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:46 crc kubenswrapper[4902]: I0121 16:07:46.838064 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:46 crc kubenswrapper[4902]: I0121 16:07:46.867215 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5866fbc874-ktwnr" podStartSLOduration=2.867194121 podStartE2EDuration="2.867194121s" podCreationTimestamp="2026-01-21 16:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:46.865471383 +0000 UTC m=+5628.942304412" watchObservedRunningTime="2026-01-21 16:07:46.867194121 +0000 UTC m=+5628.944027150" Jan 21 16:07:47 crc kubenswrapper[4902]: 
I0121 16:07:47.240383 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253033 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253259 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253329 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253445 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253598 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253706 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") pod \"4000cb23-899c-4f52-8c37-8e1c7108a21d\" (UID: \"4000cb23-899c-4f52-8c37-8e1c7108a21d\") " Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.253945 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.254748 4902 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.256513 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.263154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb" (OuterVolumeSpecName: "kube-api-access-rp8vb") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "kube-api-access-rp8vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.266383 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.282767 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts" (OuterVolumeSpecName: "scripts") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.290403 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.292324 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4000cb23-899c-4f52-8c37-8e1c7108a21d" (UID: "4000cb23-899c-4f52-8c37-8e1c7108a21d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356513 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4000cb23-899c-4f52-8c37-8e1c7108a21d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356546 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4000cb23-899c-4f52-8c37-8e1c7108a21d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356556 4902 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356565 4902 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356573 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp8vb\" (UniqueName: \"kubernetes.io/projected/4000cb23-899c-4f52-8c37-8e1c7108a21d-kube-api-access-rp8vb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.356582 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4000cb23-899c-4f52-8c37-8e1c7108a21d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.845729 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mmsfz" event={"ID":"4000cb23-899c-4f52-8c37-8e1c7108a21d","Type":"ContainerDied","Data":"6e9da8422935b55b08f6368255008dcb58e01be8b6781b8ec0c502e854c13813"} Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.845777 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e9da8422935b55b08f6368255008dcb58e01be8b6781b8ec0c502e854c13813" Jan 21 16:07:47 crc kubenswrapper[4902]: I0121 16:07:47.845745 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mmsfz" Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.449350 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.538514 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.538767 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="dnsmasq-dns" containerID="cri-o://61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda" gracePeriod=10 Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.885630 4902 generic.go:334] "Generic (PLEG): container finished" podID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerID="61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda" exitCode=0 Jan 21 16:07:50 crc kubenswrapper[4902]: I0121 16:07:50.885753 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerDied","Data":"61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda"} Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.607092 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.639794 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp2sl\" (UniqueName: \"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.640014 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.640090 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.640161 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.640203 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") pod \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\" (UID: \"2187cb72-8703-4c5a-b8ae-b08461a35e1b\") " Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.655321 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl" (OuterVolumeSpecName: 
"kube-api-access-kp2sl") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "kube-api-access-kp2sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.682000 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.683408 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.697408 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config" (OuterVolumeSpecName: "config") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.702815 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2187cb72-8703-4c5a-b8ae-b08461a35e1b" (UID: "2187cb72-8703-4c5a-b8ae-b08461a35e1b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.742953 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp2sl\" (UniqueName: \"kubernetes.io/projected/2187cb72-8703-4c5a-b8ae-b08461a35e1b-kube-api-access-kp2sl\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.743001 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.743018 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.743029 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.743061 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2187cb72-8703-4c5a-b8ae-b08461a35e1b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.897367 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" event={"ID":"2187cb72-8703-4c5a-b8ae-b08461a35e1b","Type":"ContainerDied","Data":"4b735d436fe292c999e308fdb1e7a5b7b7db30557572e001d066c0b1035f3a36"} Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.897412 4902 scope.go:117] "RemoveContainer" containerID="61e16963fa09505056e5b4b488dc20f78815be4f3c3bdb6f8785b3afb332dfda" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.897466 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578fc9f6df-sv7cs" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.919113 4902 scope.go:117] "RemoveContainer" containerID="d20e1e16697cfc6d2b0773a52a542bdb6a438acc78c99ef60039303f8affa50a" Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.939268 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:07:51 crc kubenswrapper[4902]: I0121 16:07:51.947664 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578fc9f6df-sv7cs"] Jan 21 16:07:52 crc kubenswrapper[4902]: I0121 16:07:52.306975 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" path="/var/lib/kubelet/pods/2187cb72-8703-4c5a-b8ae-b08461a35e1b/volumes" Jan 21 16:07:52 crc kubenswrapper[4902]: I0121 16:07:52.511816 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:52 crc kubenswrapper[4902]: I0121 16:07:52.512673 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.812649 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.814366 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5866fbc874-ktwnr" Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.930216 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.931223 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-84746f8478-mdj2b" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-httpd" containerID="cri-o://43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10" gracePeriod=30 Jan 21 16:07:54 crc kubenswrapper[4902]: I0121 16:07:54.931384 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-84746f8478-mdj2b" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-server" containerID="cri-o://404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5" gracePeriod=30 Jan 21 16:07:55 crc kubenswrapper[4902]: I0121 16:07:55.950370 4902 generic.go:334] "Generic (PLEG): container finished" podID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerID="404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5" exitCode=0 Jan 21 16:07:55 crc kubenswrapper[4902]: I0121 16:07:55.950648 4902 generic.go:334] "Generic (PLEG): container finished" podID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerID="43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10" exitCode=0 Jan 21 16:07:55 crc kubenswrapper[4902]: I0121 16:07:55.950419 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerDied","Data":"404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5"} Jan 21 16:07:55 crc kubenswrapper[4902]: I0121 16:07:55.950686 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" 
event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerDied","Data":"43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10"} Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.119587 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228014 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228451 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228631 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.228889 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.229088 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") pod \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\" (UID: \"de4224d4-f2fc-49c1-99cb-a5be69aa192a\") " Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.231073 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.231216 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.235783 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.236247 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd" (OuterVolumeSpecName: "kube-api-access-67jhd") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "kube-api-access-67jhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.273422 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.281813 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data" (OuterVolumeSpecName: "config-data") pod "de4224d4-f2fc-49c1-99cb-a5be69aa192a" (UID: "de4224d4-f2fc-49c1-99cb-a5be69aa192a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331201 4902 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331235 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331247 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331257 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de4224d4-f2fc-49c1-99cb-a5be69aa192a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331272 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67jhd\" (UniqueName: \"kubernetes.io/projected/de4224d4-f2fc-49c1-99cb-a5be69aa192a-kube-api-access-67jhd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.331282 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de4224d4-f2fc-49c1-99cb-a5be69aa192a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.961743 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-84746f8478-mdj2b" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.962987 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84746f8478-mdj2b" event={"ID":"de4224d4-f2fc-49c1-99cb-a5be69aa192a","Type":"ContainerDied","Data":"1bb55214e5dca05bf8741ccdd7566d6ef5e813cbe0984652ffe8f4ee39b2d239"} Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.963098 4902 scope.go:117] "RemoveContainer" containerID="404c4a3c21381e7cc61f99e3986fd4e1183c5fbe8f0d1f4a4f670f8a4ba3edf5" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.983746 4902 scope.go:117] "RemoveContainer" containerID="43e71d5e398cf8f40ce3eb06bd5ddd543ee0ab417f623428bd4d97f168c68a10" Jan 21 16:07:56 crc kubenswrapper[4902]: I0121 16:07:56.993249 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:57 crc kubenswrapper[4902]: I0121 16:07:57.000156 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-84746f8478-mdj2b"] Jan 21 16:07:58 crc kubenswrapper[4902]: I0121 16:07:58.309735 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" path="/var/lib/kubelet/pods/de4224d4-f2fc-49c1-99cb-a5be69aa192a/volumes" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.967194 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.968911 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-server" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.968948 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-server" Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.968978 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4000cb23-899c-4f52-8c37-8e1c7108a21d" containerName="swift-ring-rebalance" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.968986 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4000cb23-899c-4f52-8c37-8e1c7108a21d" containerName="swift-ring-rebalance" Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.969008 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-httpd" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.969017 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-httpd" Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.969119 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="init" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.969130 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="init" Jan 21 16:08:00 crc kubenswrapper[4902]: E0121 16:08:00.969158 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="dnsmasq-dns" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.969165 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="dnsmasq-dns" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.970366 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-httpd" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.970386 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4000cb23-899c-4f52-8c37-8e1c7108a21d" containerName="swift-ring-rebalance" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.970409 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="de4224d4-f2fc-49c1-99cb-a5be69aa192a" containerName="proxy-server" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.970424 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2187cb72-8703-4c5a-b8ae-b08461a35e1b" containerName="dnsmasq-dns" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.971189 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:00 crc kubenswrapper[4902]: I0121 16:08:00.983903 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.064084 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.065679 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.068294 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.071821 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.128212 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.128275 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfkj\" (UniqueName: \"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.229374 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.229681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.229790 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfkj\" (UniqueName: 
\"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.229899 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.230476 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.249607 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfkj\" (UniqueName: \"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") pod \"cinder-db-create-nh5zs\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.300602 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.332288 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.332433 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.333746 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.361606 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") pod \"cinder-5eaa-account-create-update-6b2pj\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.382378 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:01 crc kubenswrapper[4902]: W0121 16:08:01.843894 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod316e80e8_1286_4be7_b686_90693f8e7c95.slice/crio-b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca WatchSource:0}: Error finding container b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca: Status 404 returned error can't find the container with id b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.847372 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:08:01 crc kubenswrapper[4902]: I0121 16:08:01.924766 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:08:01 crc kubenswrapper[4902]: W0121 16:08:01.927741 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8d97084_2d8b_44c2_877e_b09211b7d84d.slice/crio-4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279 WatchSource:0}: Error finding container 4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279: Status 404 returned error can't find the container with id 4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279 Jan 21 16:08:02 crc kubenswrapper[4902]: I0121 16:08:02.020230 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5eaa-account-create-update-6b2pj" event={"ID":"d8d97084-2d8b-44c2-877e-b09211b7d84d","Type":"ContainerStarted","Data":"4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279"} Jan 21 16:08:02 crc kubenswrapper[4902]: I0121 16:08:02.023482 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nh5zs" event={"ID":"316e80e8-1286-4be7-b686-90693f8e7c95","Type":"ContainerStarted","Data":"b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca"} Jan 21 16:08:02 crc kubenswrapper[4902]: I0121 16:08:02.045022 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-nh5zs" podStartSLOduration=2.045000066 podStartE2EDuration="2.045000066s" podCreationTimestamp="2026-01-21 16:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:02.037970869 +0000 UTC m=+5644.114803898" watchObservedRunningTime="2026-01-21 16:08:02.045000066 +0000 UTC m=+5644.121833095" Jan 21 16:08:03 crc kubenswrapper[4902]: I0121 16:08:03.033067 4902 generic.go:334] "Generic (PLEG): container finished" podID="316e80e8-1286-4be7-b686-90693f8e7c95" containerID="08d576dd917c4a5813c6d9db476bd6fcba6691cafc01f2c3b9a02a013671f644" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4902]: I0121 16:08:03.033112 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nh5zs" event={"ID":"316e80e8-1286-4be7-b686-90693f8e7c95","Type":"ContainerDied","Data":"08d576dd917c4a5813c6d9db476bd6fcba6691cafc01f2c3b9a02a013671f644"} Jan 21 16:08:03 crc kubenswrapper[4902]: I0121 16:08:03.035815 4902 generic.go:334] "Generic (PLEG): container finished" podID="d8d97084-2d8b-44c2-877e-b09211b7d84d" containerID="f9ff394d565c17472cbe0972635a74048f6673c7d9a12c90517226508f39624b" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4902]: 
I0121 16:08:03.035906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5eaa-account-create-update-6b2pj" event={"ID":"d8d97084-2d8b-44c2-877e-b09211b7d84d","Type":"ContainerDied","Data":"f9ff394d565c17472cbe0972635a74048f6673c7d9a12c90517226508f39624b"} Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.512626 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.521604 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.703874 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") pod \"316e80e8-1286-4be7-b686-90693f8e7c95\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.704154 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") pod \"d8d97084-2d8b-44c2-877e-b09211b7d84d\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.704186 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfkj\" (UniqueName: \"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") pod \"316e80e8-1286-4be7-b686-90693f8e7c95\" (UID: \"316e80e8-1286-4be7-b686-90693f8e7c95\") " Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.704849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "316e80e8-1286-4be7-b686-90693f8e7c95" (UID: "316e80e8-1286-4be7-b686-90693f8e7c95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.705135 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") pod \"d8d97084-2d8b-44c2-877e-b09211b7d84d\" (UID: \"d8d97084-2d8b-44c2-877e-b09211b7d84d\") " Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.705461 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8d97084-2d8b-44c2-877e-b09211b7d84d" (UID: "d8d97084-2d8b-44c2-877e-b09211b7d84d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.705686 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8d97084-2d8b-44c2-877e-b09211b7d84d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.705707 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/316e80e8-1286-4be7-b686-90693f8e7c95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.709406 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h" (OuterVolumeSpecName: "kube-api-access-mh28h") pod "d8d97084-2d8b-44c2-877e-b09211b7d84d" (UID: "d8d97084-2d8b-44c2-877e-b09211b7d84d"). InnerVolumeSpecName "kube-api-access-mh28h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.710250 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj" (OuterVolumeSpecName: "kube-api-access-hqfkj") pod "316e80e8-1286-4be7-b686-90693f8e7c95" (UID: "316e80e8-1286-4be7-b686-90693f8e7c95"). InnerVolumeSpecName "kube-api-access-hqfkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.807694 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh28h\" (UniqueName: \"kubernetes.io/projected/d8d97084-2d8b-44c2-877e-b09211b7d84d-kube-api-access-mh28h\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4902]: I0121 16:08:04.808080 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfkj\" (UniqueName: \"kubernetes.io/projected/316e80e8-1286-4be7-b686-90693f8e7c95-kube-api-access-hqfkj\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.052655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nh5zs" event={"ID":"316e80e8-1286-4be7-b686-90693f8e7c95","Type":"ContainerDied","Data":"b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca"} Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.052698 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9431160a0f22affe0b0b83a370705ec6edb2cf1c742104612f5012b2c35c1ca" Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.052678 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh5zs" Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.056131 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5eaa-account-create-update-6b2pj" event={"ID":"d8d97084-2d8b-44c2-877e-b09211b7d84d","Type":"ContainerDied","Data":"4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279"} Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.056169 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5eaa-account-create-update-6b2pj" Jan 21 16:08:05 crc kubenswrapper[4902]: I0121 16:08:05.056177 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b272761778b72e881a56869b3b7806f9e06dd2fe05fe2e1e12fa23cbd234279" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.338496 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-k7rr4"] Jan 21 16:08:06 crc kubenswrapper[4902]: E0121 16:08:06.338910 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316e80e8-1286-4be7-b686-90693f8e7c95" containerName="mariadb-database-create" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.338927 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="316e80e8-1286-4be7-b686-90693f8e7c95" containerName="mariadb-database-create" Jan 21 16:08:06 crc kubenswrapper[4902]: E0121 16:08:06.338945 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d97084-2d8b-44c2-877e-b09211b7d84d" containerName="mariadb-account-create-update" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.338954 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d97084-2d8b-44c2-877e-b09211b7d84d" containerName="mariadb-account-create-update" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.339193 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d97084-2d8b-44c2-877e-b09211b7d84d" containerName="mariadb-account-create-update" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.339220 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="316e80e8-1286-4be7-b686-90693f8e7c95" containerName="mariadb-database-create" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.340093 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.342117 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.345387 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.346084 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v844l" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.352723 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-k7rr4"] Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.434958 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435208 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435280 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435331 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435766 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.435827 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537594 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537654 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537707 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537744 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537769 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537794 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.537804 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.543976 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.544093 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.547031 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.547425 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " 
pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.563594 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") pod \"cinder-db-sync-k7rr4\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:06 crc kubenswrapper[4902]: I0121 16:08:06.657639 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:07 crc kubenswrapper[4902]: I0121 16:08:07.126884 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-k7rr4"] Jan 21 16:08:08 crc kubenswrapper[4902]: I0121 16:08:08.081675 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7rr4" event={"ID":"610eddf1-f5de-40bb-8946-2092c4edfa9c","Type":"ContainerStarted","Data":"73644d909cc281b656bcc92b7fe668f43e2f43e5a2df8a9a26185cf7ab096d45"} Jan 21 16:08:08 crc kubenswrapper[4902]: I0121 16:08:08.082007 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7rr4" event={"ID":"610eddf1-f5de-40bb-8946-2092c4edfa9c","Type":"ContainerStarted","Data":"1fd66fcb7429c5e27c4e572b6ce058e6f6500bca307d127355c5aefd7796184b"} Jan 21 16:08:08 crc kubenswrapper[4902]: I0121 16:08:08.104453 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-k7rr4" podStartSLOduration=2.104427761 podStartE2EDuration="2.104427761s" podCreationTimestamp="2026-01-21 16:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:08.095585473 +0000 UTC m=+5650.172418512" watchObservedRunningTime="2026-01-21 16:08:08.104427761 +0000 UTC m=+5650.181260790" Jan 21 16:08:11 crc kubenswrapper[4902]: I0121 16:08:11.111123 4902 generic.go:334] "Generic (PLEG): container finished" podID="610eddf1-f5de-40bb-8946-2092c4edfa9c" containerID="73644d909cc281b656bcc92b7fe668f43e2f43e5a2df8a9a26185cf7ab096d45" exitCode=0 Jan 21 16:08:11 crc kubenswrapper[4902]: I0121 16:08:11.111225 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7rr4" event={"ID":"610eddf1-f5de-40bb-8946-2092c4edfa9c","Type":"ContainerDied","Data":"73644d909cc281b656bcc92b7fe668f43e2f43e5a2df8a9a26185cf7ab096d45"} Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.461803 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.651586 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.651853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.651950 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.651970 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.652066 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.652124 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") pod \"610eddf1-f5de-40bb-8946-2092c4edfa9c\" (UID: \"610eddf1-f5de-40bb-8946-2092c4edfa9c\") " Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.652417 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.662296 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.662348 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts" (OuterVolumeSpecName: "scripts") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.662545 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm" (OuterVolumeSpecName: "kube-api-access-8ldfm") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "kube-api-access-8ldfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.675804 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.697361 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data" (OuterVolumeSpecName: "config-data") pod "610eddf1-f5de-40bb-8946-2092c4edfa9c" (UID: "610eddf1-f5de-40bb-8946-2092c4edfa9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754365 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/610eddf1-f5de-40bb-8946-2092c4edfa9c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754398 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754407 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754416 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754425 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/610eddf1-f5de-40bb-8946-2092c4edfa9c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:12 crc kubenswrapper[4902]: I0121 16:08:12.754434 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ldfm\" (UniqueName: \"kubernetes.io/projected/610eddf1-f5de-40bb-8946-2092c4edfa9c-kube-api-access-8ldfm\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.130237 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7rr4" event={"ID":"610eddf1-f5de-40bb-8946-2092c4edfa9c","Type":"ContainerDied","Data":"1fd66fcb7429c5e27c4e572b6ce058e6f6500bca307d127355c5aefd7796184b"} Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.130280 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd66fcb7429c5e27c4e572b6ce058e6f6500bca307d127355c5aefd7796184b" Jan 21 16:08:13 crc 
kubenswrapper[4902]: I0121 16:08:13.130333 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-k7rr4" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.471995 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:08:13 crc kubenswrapper[4902]: E0121 16:08:13.472365 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610eddf1-f5de-40bb-8946-2092c4edfa9c" containerName="cinder-db-sync" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.472376 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="610eddf1-f5de-40bb-8946-2092c4edfa9c" containerName="cinder-db-sync" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.472533 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="610eddf1-f5de-40bb-8946-2092c4edfa9c" containerName="cinder-db-sync" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.475307 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.485954 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.568614 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.568926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.569308 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.569412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.569663 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.646489 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.648613 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.652850 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v844l" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.653192 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.653433 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.653685 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.667267 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671065 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671250 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671296 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.671322 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.672500 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.672545 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc 
kubenswrapper[4902]: I0121 16:08:13.673190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.679271 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.701288 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") pod \"dnsmasq-dns-69884d7f9-kfzgg\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772441 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772522 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772553 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772634 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772664 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.772758 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.793925 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874096 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874140 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874181 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874253 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874283 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874424 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.874803 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.875232 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc 
kubenswrapper[4902]: I0121 16:08:13.878435 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.879652 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.883154 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.888415 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.895514 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") pod \"cinder-api-0\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " pod="openstack/cinder-api-0" Jan 21 16:08:13 crc kubenswrapper[4902]: I0121 16:08:13.967407 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:14 crc kubenswrapper[4902]: I0121 16:08:14.317005 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:14 crc kubenswrapper[4902]: I0121 16:08:14.328321 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.160615 4902 generic.go:334] "Generic (PLEG): container finished" podID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerID="5cb68b975e1bdae1829713fed46eef25b840bf53e0813c38525f5a6f921ca76c" exitCode=0 Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.160800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerDied","Data":"5cb68b975e1bdae1829713fed46eef25b840bf53e0813c38525f5a6f921ca76c"} Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.161193 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerStarted","Data":"cce4d19f30fd69fa08a849c8261f82a05dc1b4c6705764be924dca9e7b74f41e"} Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.164477 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerStarted","Data":"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc"} Jan 21 16:08:15 crc kubenswrapper[4902]: I0121 16:08:15.164521 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerStarted","Data":"3f0d9ffbf203c9cff62aa57b5dff59c75bba53abd9b322c0b3db48b5a5865b5a"} Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.148811 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.175550 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerStarted","Data":"951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049"} Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.175664 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.177854 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerStarted","Data":"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023"} Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.178230 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:08:16 crc kubenswrapper[4902]: I0121 16:08:16.199322 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" podStartSLOduration=3.199306466 podStartE2EDuration="3.199306466s" podCreationTimestamp="2026-01-21 16:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:16.194404248 +0000 UTC m=+5658.271237277" watchObservedRunningTime="2026-01-21 16:08:16.199306466 +0000 UTC m=+5658.276139495" Jan 21 16:08:16 crc kubenswrapper[4902]: 
I0121 16:08:16.219031 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.21901043 podStartE2EDuration="3.21901043s" podCreationTimestamp="2026-01-21 16:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:16.211807167 +0000 UTC m=+5658.288640196" watchObservedRunningTime="2026-01-21 16:08:16.21901043 +0000 UTC m=+5658.295843459" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.186209 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api" containerID="cri-o://c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" gracePeriod=30 Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.187159 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api-log" containerID="cri-o://86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" gracePeriod=30 Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.754086 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856103 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856440 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856479 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856530 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs" (OuterVolumeSpecName: "logs") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856546 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856719 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856777 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") pod \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\" (UID: \"492a4cd7-f76e-408e-9f3e-6cb25b40248b\") " Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.856837 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.857542 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492a4cd7-f76e-408e-9f3e-6cb25b40248b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.857564 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492a4cd7-f76e-408e-9f3e-6cb25b40248b-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.862668 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r" (OuterVolumeSpecName: "kube-api-access-cjx6r") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "kube-api-access-cjx6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.869174 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.869205 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts" (OuterVolumeSpecName: "scripts") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.882426 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.916759 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data" (OuterVolumeSpecName: "config-data") pod "492a4cd7-f76e-408e-9f3e-6cb25b40248b" (UID: "492a4cd7-f76e-408e-9f3e-6cb25b40248b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958871 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjx6r\" (UniqueName: \"kubernetes.io/projected/492a4cd7-f76e-408e-9f3e-6cb25b40248b-kube-api-access-cjx6r\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958905 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958917 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958925 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:17 crc kubenswrapper[4902]: I0121 16:08:17.958933 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492a4cd7-f76e-408e-9f3e-6cb25b40248b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.195997 4902 generic.go:334] "Generic (PLEG): container finished" podID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" exitCode=0 Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196033 4902 generic.go:334] "Generic (PLEG): container finished" podID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" exitCode=143 Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerDied","Data":"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023"} Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196102 
4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerDied","Data":"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc"} Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196114 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196127 4902 scope.go:117] "RemoveContainer" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.196115 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492a4cd7-f76e-408e-9f3e-6cb25b40248b","Type":"ContainerDied","Data":"3f0d9ffbf203c9cff62aa57b5dff59c75bba53abd9b322c0b3db48b5a5865b5a"} Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.227552 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.227623 4902 scope.go:117] "RemoveContainer" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.238102 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.250775 4902 scope.go:117] "RemoveContainer" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" Jan 21 16:08:18 crc kubenswrapper[4902]: E0121 16:08:18.251271 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": container with ID starting with c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023 not found: ID does not exist" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.251315 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023"} err="failed to get container status \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": rpc error: code = NotFound desc = could not find container \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": container with ID starting with c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023 not found: ID does not exist" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.251351 4902 scope.go:117] "RemoveContainer" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" Jan 21 16:08:18 crc kubenswrapper[4902]: E0121 16:08:18.251789 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": container with ID starting with 86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc not found: ID does not exist" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.251819 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc"} err="failed to get container status 
\"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": rpc error: code = NotFound desc = could not find container \"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": container with ID starting with 86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc not found: ID does not exist" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.251846 4902 scope.go:117] "RemoveContainer" containerID="c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.252116 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023"} err="failed to get container status \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": rpc error: code = NotFound desc = could not find container \"c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023\": container with ID starting with c8473424ae56d6c9f7cb4b4d0864363dd6c66c3537c2db0caf8103ff8a8ab023 not found: ID does not exist" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.252137 4902 scope.go:117] "RemoveContainer" containerID="86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.252382 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc"} err="failed to get container status \"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": rpc error: code = NotFound desc = could not find container \"86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc\": container with ID starting with 86e4a0c5cac18cd025379fe0fa9017025c4f4332face6bf2ea998203cfe471fc not found: ID does not exist" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263194 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:18 crc kubenswrapper[4902]: E0121 16:08:18.263532 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263548 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api" Jan 21 16:08:18 crc kubenswrapper[4902]: E0121 16:08:18.263586 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api-log" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263592 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api-log" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263740 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.263761 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" containerName="cinder-api-log" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.264629 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.267183 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v844l" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.267769 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.268035 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.268232 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.268386 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.268551 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.281437 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.311664 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492a4cd7-f76e-408e-9f3e-6cb25b40248b" path="/var/lib/kubelet/pods/492a4cd7-f76e-408e-9f3e-6cb25b40248b/volumes" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365625 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365668 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365795 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365826 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365871 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365892 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.365988 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.366072 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.366170 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468242 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468311 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468337 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468368 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468402 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468422 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.468450 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.469009 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.469426 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.472242 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.474395 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.475007 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.475074 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.475747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.476493 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.489961 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") pod \"cinder-api-0\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " pod="openstack/cinder-api-0" Jan 21 16:08:18 crc kubenswrapper[4902]: I0121 16:08:18.581570 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:19 crc kubenswrapper[4902]: I0121 16:08:19.087924 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:19 crc kubenswrapper[4902]: W0121 16:08:19.093826 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f00f389_2d9c_443a_bf45_f1ddb6cea29c.slice/crio-99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619 WatchSource:0}: Error finding container 99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619: Status 404 returned error can't find the container with id 99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619 Jan 21 16:08:19 crc kubenswrapper[4902]: I0121 16:08:19.207484 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerStarted","Data":"99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619"} Jan 21 16:08:20 crc kubenswrapper[4902]: I0121 16:08:20.222145 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerStarted","Data":"c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b"} Jan 21 16:08:21 crc kubenswrapper[4902]: I0121 16:08:21.235589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerStarted","Data":"528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a"} Jan 21 16:08:21 crc kubenswrapper[4902]: I0121 16:08:21.236085 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:08:21 crc kubenswrapper[4902]: I0121 16:08:21.269023 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.268993902 podStartE2EDuration="3.268993902s" podCreationTimestamp="2026-01-21 16:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:21.259085593 +0000 UTC m=+5663.335918652" watchObservedRunningTime="2026-01-21 16:08:21.268993902 +0000 UTC m=+5663.345826951" Jan 21 16:08:23 crc kubenswrapper[4902]: I0121 16:08:23.797386 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:08:23 crc kubenswrapper[4902]: I0121 16:08:23.894940 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:08:23 crc kubenswrapper[4902]: I0121 16:08:23.895199 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" 
podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="dnsmasq-dns" containerID="cri-o://93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f" gracePeriod=10 Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.287207 4902 generic.go:334] "Generic (PLEG): container finished" podID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerID="93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f" exitCode=0 Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.287528 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerDied","Data":"93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f"} Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.538349 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.694717 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.694857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.694928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.694983 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.695023 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb\") pod \"111bf0bc-8088-42d9-bf09-396b7d087ae8\" (UID: \"111bf0bc-8088-42d9-bf09-396b7d087ae8\") " Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.706156 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q" (OuterVolumeSpecName: "kube-api-access-plr2q") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "kube-api-access-plr2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.744352 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.749029 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.759612 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.772759 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config" (OuterVolumeSpecName: "config") pod "111bf0bc-8088-42d9-bf09-396b7d087ae8" (UID: "111bf0bc-8088-42d9-bf09-396b7d087ae8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796556 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796589 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plr2q\" (UniqueName: \"kubernetes.io/projected/111bf0bc-8088-42d9-bf09-396b7d087ae8-kube-api-access-plr2q\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796603 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796612 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:24 crc kubenswrapper[4902]: I0121 16:08:24.796621 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111bf0bc-8088-42d9-bf09-396b7d087ae8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.296808 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" event={"ID":"111bf0bc-8088-42d9-bf09-396b7d087ae8","Type":"ContainerDied","Data":"755a6256ef5a838eba2c3bd36413f341c71947696922ab6ab33e27a445898c69"} Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.296859 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fc6cc5b55-79wth" Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.296870 4902 scope.go:117] "RemoveContainer" containerID="93731fa0d5c4e506f4bd5fe81d39edaa6cbe9eb7e4608fe707e85adf923a410f" Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.317956 4902 scope.go:117] "RemoveContainer" containerID="f7e84713417d76194209c0593e58d711f067795272f7e92b6fd9b97ab7a3b30b" Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.338991 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:08:25 crc kubenswrapper[4902]: I0121 16:08:25.346938 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fc6cc5b55-79wth"] Jan 21 16:08:26 crc kubenswrapper[4902]: I0121 16:08:26.308232 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" path="/var/lib/kubelet/pods/111bf0bc-8088-42d9-bf09-396b7d087ae8/volumes" Jan 21 16:08:30 crc kubenswrapper[4902]: I0121 16:08:30.387839 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.669986 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:08:47 crc kubenswrapper[4902]: E0121 16:08:47.670998 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="init" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.671013 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="init" Jan 21 16:08:47 crc kubenswrapper[4902]: E0121 16:08:47.671034 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="dnsmasq-dns" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.671061 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="dnsmasq-dns" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.671266 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="111bf0bc-8088-42d9-bf09-396b7d087ae8" containerName="dnsmasq-dns" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.672396 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.675683 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.693858 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.822497 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.822572 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.822595 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.822659 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.823182 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.823328 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.925354 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.925739 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.925898 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.926215 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.926469 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.926638 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.926648 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.933937 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.934552 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.938509 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.938947 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.944895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") pod \"cinder-scheduler-0\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " 
pod="openstack/cinder-scheduler-0" Jan 21 16:08:47 crc kubenswrapper[4902]: I0121 16:08:47.993116 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.451771 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.498620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerStarted","Data":"ce33e9839d5cefb904a6f325f236da539af4c95c84835eb4cf854486a05a8ed2"} Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.881795 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.882124 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api-log" containerID="cri-o://c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b" gracePeriod=30 Jan 21 16:08:48 crc kubenswrapper[4902]: I0121 16:08:48.882585 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api" containerID="cri-o://528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a" gracePeriod=30 Jan 21 16:08:49 crc kubenswrapper[4902]: I0121 16:08:49.508796 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerStarted","Data":"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e"} Jan 21 16:08:49 crc kubenswrapper[4902]: I0121 16:08:49.511367 4902 generic.go:334] "Generic (PLEG): container finished" podID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerID="c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b" exitCode=143 Jan 21 16:08:49 crc kubenswrapper[4902]: I0121 16:08:49.511393 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerDied","Data":"c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b"} Jan 21 16:08:50 crc kubenswrapper[4902]: I0121 16:08:50.521202 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerStarted","Data":"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed"} Jan 21 16:08:50 crc kubenswrapper[4902]: I0121 16:08:50.542467 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.5424485900000002 podStartE2EDuration="3.54244859s" podCreationTimestamp="2026-01-21 16:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:50.536628077 +0000 UTC m=+5692.613461106" watchObservedRunningTime="2026-01-21 16:08:50.54244859 +0000 UTC m=+5692.619281619" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.543736 4902 generic.go:334] "Generic (PLEG): container finished" podID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerID="528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a" exitCode=0 Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.543814 
4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerDied","Data":"528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a"} Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.630639 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796815 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796890 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.796937 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797477 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs" (OuterVolumeSpecName: "logs") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797073 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797813 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797849 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") pod \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\" (UID: \"5f00f389-2d9c-443a-bf45-f1ddb6cea29c\") " Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.797861 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.798283 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.798299 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.802985 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf" (OuterVolumeSpecName: "kube-api-access-z4vqf") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "kube-api-access-z4vqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.803265 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts" (OuterVolumeSpecName: "scripts") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.815290 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.831390 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.857239 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.857671 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data" (OuterVolumeSpecName: "config-data") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.858800 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5f00f389-2d9c-443a-bf45-f1ddb6cea29c" (UID: "5f00f389-2d9c-443a-bf45-f1ddb6cea29c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899555 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899595 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899606 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899616 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4vqf\" (UniqueName: \"kubernetes.io/projected/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-kube-api-access-z4vqf\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899625 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899632 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.899639 4902 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f00f389-2d9c-443a-bf45-f1ddb6cea29c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:52 crc kubenswrapper[4902]: I0121 16:08:52.993451 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.556708 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5f00f389-2d9c-443a-bf45-f1ddb6cea29c","Type":"ContainerDied","Data":"99bcc40edef719f6b32b7ecb95b8438fb7f37784a97bb90751ee8f29eb800619"} Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.556795 4902 scope.go:117] "RemoveContainer" containerID="528739ef35dc7389ee33f73742ca6fe14e22e7e82f89d0a372308467295c621a" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.556802 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.603779 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.604649 4902 scope.go:117] "RemoveContainer" containerID="c57e300c081d7900319e10d0ff1145e3f13d8d1a05398598894df17dd6db366b" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.622545 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.638803 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:53 crc kubenswrapper[4902]: E0121 16:08:53.639231 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.639256 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api" Jan 21 16:08:53 crc kubenswrapper[4902]: E0121 16:08:53.639279 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api-log" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.639289 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api-log" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.639523 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.639563 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" containerName="cinder-api-log" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.640680 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.644617 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.645016 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.645236 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.653932 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721368 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721451 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data-custom\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721470 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721486 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d9842a-4646-47c5-a81c-18e641f7617f-logs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721518 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-scripts\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721537 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24d9842a-4646-47c5-a81c-18e641f7617f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721561 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721648 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.721690 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np6k4\" (UniqueName: \"kubernetes.io/projected/24d9842a-4646-47c5-a81c-18e641f7617f-kube-api-access-np6k4\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823208 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823297 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data-custom\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823326 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823348 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d9842a-4646-47c5-a81c-18e641f7617f-logs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823387 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-scripts\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823414 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24d9842a-4646-47c5-a81c-18e641f7617f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823445 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24d9842a-4646-47c5-a81c-18e641f7617f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823502 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823590 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np6k4\" (UniqueName: \"kubernetes.io/projected/24d9842a-4646-47c5-a81c-18e641f7617f-kube-api-access-np6k4\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.823991 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d9842a-4646-47c5-a81c-18e641f7617f-logs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.828280 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.828601 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data-custom\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.828637 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.828663 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.830262 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-config-data\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.844952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d9842a-4646-47c5-a81c-18e641f7617f-scripts\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.851935 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np6k4\" (UniqueName: \"kubernetes.io/projected/24d9842a-4646-47c5-a81c-18e641f7617f-kube-api-access-np6k4\") pod \"cinder-api-0\" (UID: \"24d9842a-4646-47c5-a81c-18e641f7617f\") " pod="openstack/cinder-api-0" Jan 21 16:08:53 crc kubenswrapper[4902]: I0121 16:08:53.970542 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:08:54 crc kubenswrapper[4902]: I0121 16:08:54.265399 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:08:54 crc kubenswrapper[4902]: W0121 16:08:54.268285 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d9842a_4646_47c5_a81c_18e641f7617f.slice/crio-4337578762bf90751771f3dad5c9463114af7c4ebb57c09775cf4b9177b9af71 WatchSource:0}: Error finding container 4337578762bf90751771f3dad5c9463114af7c4ebb57c09775cf4b9177b9af71: Status 404 returned error can't find the container with id 4337578762bf90751771f3dad5c9463114af7c4ebb57c09775cf4b9177b9af71 Jan 21 16:08:54 crc kubenswrapper[4902]: I0121 16:08:54.305637 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f00f389-2d9c-443a-bf45-f1ddb6cea29c" path="/var/lib/kubelet/pods/5f00f389-2d9c-443a-bf45-f1ddb6cea29c/volumes" Jan 21 16:08:54 crc kubenswrapper[4902]: I0121 16:08:54.567489 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24d9842a-4646-47c5-a81c-18e641f7617f","Type":"ContainerStarted","Data":"4337578762bf90751771f3dad5c9463114af7c4ebb57c09775cf4b9177b9af71"} Jan 21 16:08:55 crc kubenswrapper[4902]: I0121 16:08:55.578137 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24d9842a-4646-47c5-a81c-18e641f7617f","Type":"ContainerStarted","Data":"5b34d4ea275de8ef636bbd3abf5ab38448026e43d127208c66d9478d2070b5b4"} Jan 21 16:08:55 crc kubenswrapper[4902]: I0121 16:08:55.578424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"24d9842a-4646-47c5-a81c-18e641f7617f","Type":"ContainerStarted","Data":"2e02282ffcfd24598a8f07268602eacd34c31aaac7f5dadcaa89f6a4b3058400"} Jan 21 16:08:55 crc kubenswrapper[4902]: I0121 16:08:55.578445 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:08:55 crc kubenswrapper[4902]: I0121 16:08:55.601231 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.601211908 podStartE2EDuration="2.601211908s" podCreationTimestamp="2026-01-21 16:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:55.600326443 +0000 UTC m=+5697.677159472" watchObservedRunningTime="2026-01-21 16:08:55.601211908 +0000 UTC m=+5697.678044937" Jan 21 16:08:58 crc kubenswrapper[4902]: I0121 16:08:58.202634 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 16:08:58 crc kubenswrapper[4902]: I0121 16:08:58.261935 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:08:58 crc kubenswrapper[4902]: I0121 16:08:58.602447 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="cinder-scheduler" containerID="cri-o://4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" gracePeriod=30 Jan 21 16:08:58 crc kubenswrapper[4902]: I0121 16:08:58.603313 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="probe" 
containerID="cri-o://c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" gracePeriod=30 Jan 21 16:08:59 crc kubenswrapper[4902]: I0121 16:08:59.614486 4902 generic.go:334] "Generic (PLEG): container finished" podID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerID="c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" exitCode=0 Jan 21 16:08:59 crc kubenswrapper[4902]: I0121 16:08:59.614840 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerDied","Data":"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed"} Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.164552 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266395 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266449 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266695 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266788 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266848 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") pod \"4621cb0e-ad03-4a82-89a0-a14392def1e7\" (UID: \"4621cb0e-ad03-4a82-89a0-a14392def1e7\") " Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.266985 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.267357 4902 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4621cb0e-ad03-4a82-89a0-a14392def1e7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.272844 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts" (OuterVolumeSpecName: "scripts") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.274855 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.275065 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7" (OuterVolumeSpecName: "kube-api-access-2gqz7") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "kube-api-access-2gqz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.345306 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.371032 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.371096 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.371114 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.371129 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gqz7\" (UniqueName: \"kubernetes.io/projected/4621cb0e-ad03-4a82-89a0-a14392def1e7-kube-api-access-2gqz7\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.374230 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data" (OuterVolumeSpecName: "config-data") pod "4621cb0e-ad03-4a82-89a0-a14392def1e7" (UID: "4621cb0e-ad03-4a82-89a0-a14392def1e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.473528 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4621cb0e-ad03-4a82-89a0-a14392def1e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.629957 4902 generic.go:334] "Generic (PLEG): container finished" podID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerID="4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" exitCode=0 Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.630008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerDied","Data":"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e"} Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.630055 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4621cb0e-ad03-4a82-89a0-a14392def1e7","Type":"ContainerDied","Data":"ce33e9839d5cefb904a6f325f236da539af4c95c84835eb4cf854486a05a8ed2"} Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.630077 4902 scope.go:117] "RemoveContainer" containerID="c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.630201 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.675747 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.710127 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713198 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:00 crc kubenswrapper[4902]: E0121 16:09:00.713637 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="probe" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713654 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="probe" Jan 21 16:09:00 crc kubenswrapper[4902]: E0121 16:09:00.713670 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="cinder-scheduler" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713679 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="cinder-scheduler" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713836 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="probe" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.713853 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" containerName="cinder-scheduler" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.714723 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.717452 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.725388 4902 scope.go:117] "RemoveContainer" containerID="4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.733566 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.772455 4902 scope.go:117] "RemoveContainer" containerID="c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" Jan 21 16:09:00 crc kubenswrapper[4902]: E0121 16:09:00.773013 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed\": container with ID starting with c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed not found: ID does not exist" containerID="c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.773109 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed"} err="failed to get container status \"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed\": rpc error: code = NotFound desc = could not find container \"c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed\": container with ID starting with c791432512036452aae318e43f7f460892d078ae6751c1341a58b5b2d62e80ed not found: ID does not exist" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.773150 4902 scope.go:117] "RemoveContainer" containerID="4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" Jan 21 16:09:00 crc kubenswrapper[4902]: E0121 16:09:00.774137 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e\": container with ID starting with 4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e not found: ID does not exist" containerID="4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.774173 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e"} err="failed to get container status \"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e\": rpc error: code = NotFound desc = could not find container \"4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e\": container with ID starting with 4112a6d0636f3b41c7e2ab301d130ce9a0b496e959fdff506cb9357041ecf63e not found: ID does not exist" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790759 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-scripts\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790848 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jpsfq\" (UniqueName: \"kubernetes.io/projected/16354b62-7b74-468c-8953-3a41b1dc1a66-kube-api-access-jpsfq\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790923 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.790939 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.791000 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16354b62-7b74-468c-8953-3a41b1dc1a66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892598 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892670 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892695 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892725 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16354b62-7b74-468c-8953-3a41b1dc1a66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892855 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-scripts\") pod \"cinder-scheduler-0\" 
(UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.892899 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpsfq\" (UniqueName: \"kubernetes.io/projected/16354b62-7b74-468c-8953-3a41b1dc1a66-kube-api-access-jpsfq\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.893335 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16354b62-7b74-468c-8953-3a41b1dc1a66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.898717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-scripts\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.899249 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.899363 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.899495 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354b62-7b74-468c-8953-3a41b1dc1a66-config-data\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:00 crc kubenswrapper[4902]: I0121 16:09:00.908873 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpsfq\" (UniqueName: \"kubernetes.io/projected/16354b62-7b74-468c-8953-3a41b1dc1a66-kube-api-access-jpsfq\") pod \"cinder-scheduler-0\" (UID: \"16354b62-7b74-468c-8953-3a41b1dc1a66\") " pod="openstack/cinder-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4902]: I0121 16:09:01.040303 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4902]: I0121 16:09:01.457581 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:09:01 crc kubenswrapper[4902]: W0121 16:09:01.461960 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16354b62_7b74_468c_8953_3a41b1dc1a66.slice/crio-197dd8604d46e846f0b78367c2171681ce0ac61cbf9e2325c5e50dbb33d3b228 WatchSource:0}: Error finding container 197dd8604d46e846f0b78367c2171681ce0ac61cbf9e2325c5e50dbb33d3b228: Status 404 returned error can't find the container with id 197dd8604d46e846f0b78367c2171681ce0ac61cbf9e2325c5e50dbb33d3b228 Jan 21 16:09:01 crc kubenswrapper[4902]: I0121 16:09:01.640792 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16354b62-7b74-468c-8953-3a41b1dc1a66","Type":"ContainerStarted","Data":"197dd8604d46e846f0b78367c2171681ce0ac61cbf9e2325c5e50dbb33d3b228"} Jan 21 16:09:02 crc kubenswrapper[4902]: I0121 16:09:02.313837 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4621cb0e-ad03-4a82-89a0-a14392def1e7" path="/var/lib/kubelet/pods/4621cb0e-ad03-4a82-89a0-a14392def1e7/volumes" Jan 21 16:09:02 crc kubenswrapper[4902]: I0121 16:09:02.656134 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16354b62-7b74-468c-8953-3a41b1dc1a66","Type":"ContainerStarted","Data":"7eaa2ec2402e0110ed8bb33b22d716d250c245e0dca827e09a9df521af3ee8c3"} Jan 21 16:09:03 crc kubenswrapper[4902]: I0121 16:09:03.666634 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"16354b62-7b74-468c-8953-3a41b1dc1a66","Type":"ContainerStarted","Data":"a258ff3809e0aff102cc97afbd32d3d0546a647f066bf1dadb98911386074ac9"} Jan 21 16:09:03 crc kubenswrapper[4902]: I0121 16:09:03.694863 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.694840748 podStartE2EDuration="3.694840748s" podCreationTimestamp="2026-01-21 16:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:03.685581557 +0000 UTC m=+5705.762414596" watchObservedRunningTime="2026-01-21 16:09:03.694840748 +0000 UTC m=+5705.771673797" Jan 21 16:09:06 crc kubenswrapper[4902]: I0121 16:09:06.040676 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 16:09:06 crc kubenswrapper[4902]: I0121 16:09:06.041256 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 16:09:11 crc kubenswrapper[4902]: I0121 16:09:11.238104 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.115815 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-47gxx"] Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.117270 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.127506 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-47gxx"] Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.143555 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.143643 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.229735 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"] Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.230952 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.233395 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.240925 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"] Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.244695 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.244771 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.244851 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.244938 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.246237 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.268658 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") pod \"glance-db-create-47gxx\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.349575 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.349766 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.350434 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.368526 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") pod \"glance-acb5-account-create-update-v87vq\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.439577 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-47gxx" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.548783 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:14 crc kubenswrapper[4902]: I0121 16:09:14.920933 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-47gxx"] Jan 21 16:09:14 crc kubenswrapper[4902]: W0121 16:09:14.928833 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5062b64_8c2a_46ee_ab92_3eb4d6e3fe95.slice/crio-564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d WatchSource:0}: Error finding container 564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d: Status 404 returned error can't find the container with id 564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.040985 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"] Jan 21 16:09:15 crc kubenswrapper[4902]: W0121 16:09:15.056130 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91fe5022_2b6f_46b9_9275_c8a809b32808.slice/crio-73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03 WatchSource:0}: Error finding container 73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03: Status 404 returned error can't find the container with id 73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03 Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.778853 4902 generic.go:334] "Generic (PLEG): container finished" podID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" containerID="896306bd2b1df34ec4addf4110626bc7531717802d050ed131267e70790b5a08" exitCode=0 Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.778929 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-47gxx" event={"ID":"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95","Type":"ContainerDied","Data":"896306bd2b1df34ec4addf4110626bc7531717802d050ed131267e70790b5a08"} Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.779372 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-47gxx" event={"ID":"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95","Type":"ContainerStarted","Data":"564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d"} Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.781983 4902 generic.go:334] "Generic (PLEG): container finished" podID="91fe5022-2b6f-46b9-9275-c8a809b32808" containerID="fa806723dfd7c0c4b6154749911e6912458d2480fc0fa40932f24e709061ffad" exitCode=0 Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.782059 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-acb5-account-create-update-v87vq" event={"ID":"91fe5022-2b6f-46b9-9275-c8a809b32808","Type":"ContainerDied","Data":"fa806723dfd7c0c4b6154749911e6912458d2480fc0fa40932f24e709061ffad"} Jan 21 16:09:15 crc kubenswrapper[4902]: I0121 16:09:15.782094 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-acb5-account-create-update-v87vq" event={"ID":"91fe5022-2b6f-46b9-9275-c8a809b32808","Type":"ContainerStarted","Data":"73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03"} Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.283708 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.289678 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-47gxx" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.301769 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") pod \"91fe5022-2b6f-46b9-9275-c8a809b32808\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.302025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") pod \"91fe5022-2b6f-46b9-9275-c8a809b32808\" (UID: \"91fe5022-2b6f-46b9-9275-c8a809b32808\") " Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.302592 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91fe5022-2b6f-46b9-9275-c8a809b32808" (UID: "91fe5022-2b6f-46b9-9275-c8a809b32808"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.310034 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn" (OuterVolumeSpecName: "kube-api-access-nsfzn") pod "91fe5022-2b6f-46b9-9275-c8a809b32808" (UID: "91fe5022-2b6f-46b9-9275-c8a809b32808"). InnerVolumeSpecName "kube-api-access-nsfzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.403768 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") pod \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.403818 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") pod \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\" (UID: \"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95\") " Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.404202 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91fe5022-2b6f-46b9-9275-c8a809b32808-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.404222 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsfzn\" (UniqueName: \"kubernetes.io/projected/91fe5022-2b6f-46b9-9275-c8a809b32808-kube-api-access-nsfzn\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.404409 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" (UID: "f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.406319 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq" (OuterVolumeSpecName: "kube-api-access-z8cdq") pod "f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" (UID: "f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95"). InnerVolumeSpecName "kube-api-access-z8cdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.505605 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8cdq\" (UniqueName: \"kubernetes.io/projected/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-kube-api-access-z8cdq\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.505639 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.815867 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-47gxx" event={"ID":"f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95","Type":"ContainerDied","Data":"564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d"} Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.816363 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564325e9036ee60992baa8a7e61c26741d72e356091de1d9184d1fc8bf8a0e9d" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.815892 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-47gxx" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.817724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-acb5-account-create-update-v87vq" event={"ID":"91fe5022-2b6f-46b9-9275-c8a809b32808","Type":"ContainerDied","Data":"73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03"} Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.817760 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73998dc13065700af19fb284652ddcc2ebf55bd058a5a03ed578177fad335b03" Jan 21 16:09:17 crc kubenswrapper[4902]: I0121 16:09:17.817826 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-acb5-account-create-update-v87vq" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.403211 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8xw4q"] Jan 21 16:09:19 crc kubenswrapper[4902]: E0121 16:09:19.403557 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" containerName="mariadb-database-create" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.403568 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" containerName="mariadb-database-create" Jan 21 16:09:19 crc kubenswrapper[4902]: E0121 16:09:19.403584 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fe5022-2b6f-46b9-9275-c8a809b32808" containerName="mariadb-account-create-update" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.403590 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fe5022-2b6f-46b9-9275-c8a809b32808" containerName="mariadb-account-create-update" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.405418 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" containerName="mariadb-database-create" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.405468 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fe5022-2b6f-46b9-9275-c8a809b32808" containerName="mariadb-account-create-update" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.407563 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.409294 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mn7jp" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.409527 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.418032 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8xw4q"] Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.436860 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.436912 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.436995 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.437106 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.539556 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.539669 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.539728 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.539751 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.545872 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.546812 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.549531 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.558787 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") pod \"glance-db-sync-8xw4q\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:19 crc kubenswrapper[4902]: I0121 16:09:19.724005 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:20 crc kubenswrapper[4902]: I0121 16:09:20.224353 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8xw4q"] Jan 21 16:09:20 crc kubenswrapper[4902]: I0121 16:09:20.859655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8xw4q" event={"ID":"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3","Type":"ContainerStarted","Data":"203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2"} Jan 21 16:09:20 crc kubenswrapper[4902]: I0121 16:09:20.859996 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8xw4q" event={"ID":"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3","Type":"ContainerStarted","Data":"8979e9039ffe0d36c13a9987e7d17cd697d78bcf90b7adfceedd998dffa5e223"} Jan 21 16:09:20 crc kubenswrapper[4902]: I0121 16:09:20.882125 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8xw4q" podStartSLOduration=1.8821052950000001 podStartE2EDuration="1.882105295s" podCreationTimestamp="2026-01-21 16:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:20.872594888 +0000 UTC m=+5722.949427927" watchObservedRunningTime="2026-01-21 16:09:20.882105295 +0000 UTC m=+5722.958938324" Jan 21 16:09:23 crc kubenswrapper[4902]: E0121 16:09:23.890469 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba2b6d5_88af_4c5d_93dd_21ed05fe3ba3.slice/crio-conmon-203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:09:23 crc kubenswrapper[4902]: I0121 16:09:23.890992 4902 generic.go:334] "Generic (PLEG): container finished" podID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" containerID="203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2" exitCode=0 Jan 21 16:09:23 crc kubenswrapper[4902]: I0121 16:09:23.891017 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8xw4q" event={"ID":"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3","Type":"ContainerDied","Data":"203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2"} Jan 21 16:09:24 crc kubenswrapper[4902]: I0121 16:09:24.545280 4902 scope.go:117] "RemoveContainer" containerID="0b56fe28c730faebb9b858e50e97ecef1625af2c756c8684ae0d499694f95667" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.344677 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.466179 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") pod \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.466265 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") pod \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.466375 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") pod \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.466424 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") pod \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\" (UID: \"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3\") " Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.472557 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d" (OuterVolumeSpecName: "kube-api-access-pgs4d") pod "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" (UID: "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3"). InnerVolumeSpecName "kube-api-access-pgs4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.478280 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" (UID: "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.491618 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" (UID: "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.543335 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data" (OuterVolumeSpecName: "config-data") pod "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" (UID: "8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.568201 4902 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.568238 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgs4d\" (UniqueName: \"kubernetes.io/projected/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-kube-api-access-pgs4d\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.568255 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.568267 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.909890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8xw4q" event={"ID":"8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3","Type":"ContainerDied","Data":"8979e9039ffe0d36c13a9987e7d17cd697d78bcf90b7adfceedd998dffa5e223"} Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.909937 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8979e9039ffe0d36c13a9987e7d17cd697d78bcf90b7adfceedd998dffa5e223" Jan 21 16:09:25 crc kubenswrapper[4902]: I0121 16:09:25.909993 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8xw4q" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.183586 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:26 crc kubenswrapper[4902]: E0121 16:09:26.183930 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" containerName="glance-db-sync" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.183946 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" containerName="glance-db-sync" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.184136 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" containerName="glance-db-sync" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.184934 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.190999 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.192745 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.192761 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mn7jp" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.199836 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.285554 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.287309 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.312886 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.378578 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.380863 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383411 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383785 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383811 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383852 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383899 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383921 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2cj4\" (UniqueName: \"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.383952 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384015 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384033 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384126 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384164 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384179 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384240 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384349 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384408 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384453 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.384490 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.398336 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486053 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486112 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486136 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486283 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486327 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") pod 
\"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486389 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486437 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486476 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2cj4\" (UniqueName: \"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486531 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486611 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486686 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486743 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486759 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486774 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486839 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.486877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.487140 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.487931 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488208 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488282 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488538 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488719 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.488814 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.490775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.492883 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.494197 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.494720 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.497276 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.503605 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.504204 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.506829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.509231 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2cj4\" (UniqueName: 
\"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") pod \"dnsmasq-dns-6bdc9ddbfc-5j79v\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.511717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") pod \"glance-default-internal-api-0\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.606288 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:26 crc kubenswrapper[4902]: I0121 16:09:26.700513 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:27 crc kubenswrapper[4902]: I0121 16:09:26.802808 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:27 crc kubenswrapper[4902]: I0121 16:09:27.356460 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:27 crc kubenswrapper[4902]: I0121 16:09:27.820314 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:09:27 crc kubenswrapper[4902]: I0121 16:09:27.944620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerStarted","Data":"00e6bf2e91ac88d92bb8b5aa081d62fc62c975ee1e0acebbcdf006224895188a"} Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.003607 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.119266 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:28 crc kubenswrapper[4902]: W0121 16:09:28.128837 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89e7fd5c_9bb9_4f38_98d2_9cbfb20480d7.slice/crio-1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25 WatchSource:0}: Error finding container 1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25: Status 404 returned error can't find the container with id 1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25 Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.335070 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.975947 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerStarted","Data":"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"} Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.976317 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerStarted","Data":"e143ea804eb357383be65a4f9299d35f13390bc0106b3c7a48e5ed3ee751c488"} Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.980011 4902 
generic.go:334] "Generic (PLEG): container finished" podID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerID="9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545" exitCode=0 Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.980091 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerDied","Data":"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545"} Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.986644 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerStarted","Data":"2391defccba80f13530a209fd1975b89ba247557a50efabc95271e6deef454d1"} Jan 21 16:09:28 crc kubenswrapper[4902]: I0121 16:09:28.986679 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerStarted","Data":"1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25"} Jan 21 16:09:29 crc kubenswrapper[4902]: I0121 16:09:29.994791 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerStarted","Data":"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"} Jan 21 16:09:29 crc kubenswrapper[4902]: I0121 16:09:29.994933 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-log" containerID="cri-o://dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28" gracePeriod=30 Jan 21 16:09:29 crc kubenswrapper[4902]: I0121 16:09:29.994976 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-httpd" containerID="cri-o://ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1" gracePeriod=30 Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.000638 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerStarted","Data":"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7"} Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.000993 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.003832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerStarted","Data":"e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42"} Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.003976 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-log" containerID="cri-o://2391defccba80f13530a209fd1975b89ba247557a50efabc95271e6deef454d1" gracePeriod=30 Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.004112 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-httpd" containerID="cri-o://e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42" gracePeriod=30 Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.036868 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.036847101 podStartE2EDuration="4.036847101s" podCreationTimestamp="2026-01-21 16:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:30.013963958 +0000 UTC m=+5732.090796997" watchObservedRunningTime="2026-01-21 16:09:30.036847101 +0000 UTC m=+5732.113680130" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.037198 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.037192921 podStartE2EDuration="4.037192921s" podCreationTimestamp="2026-01-21 16:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:30.033414555 +0000 UTC m=+5732.110247604" watchObservedRunningTime="2026-01-21 16:09:30.037192921 +0000 UTC m=+5732.114025950" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.052113 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" podStartSLOduration=4.05209098 podStartE2EDuration="4.05209098s" podCreationTimestamp="2026-01-21 16:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:30.050607178 +0000 UTC m=+5732.127440207" watchObservedRunningTime="2026-01-21 16:09:30.05209098 +0000 UTC m=+5732.128924009" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.748015 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.888787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889035 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889163 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889199 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889259 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889320 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") pod \"d895a439-2fd1-43e5-ae5b-37c1b855a857\" (UID: \"d895a439-2fd1-43e5-ae5b-37c1b855a857\") " Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889579 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.889816 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs" (OuterVolumeSpecName: "logs") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.890560 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.890586 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d895a439-2fd1-43e5-ae5b-37c1b855a857-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.893981 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts" (OuterVolumeSpecName: "scripts") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.894205 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8" (OuterVolumeSpecName: "kube-api-access-v8tn8") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "kube-api-access-v8tn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.916663 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.938533 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data" (OuterVolumeSpecName: "config-data") pod "d895a439-2fd1-43e5-ae5b-37c1b855a857" (UID: "d895a439-2fd1-43e5-ae5b-37c1b855a857"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.991802 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.991836 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.991847 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d895a439-2fd1-43e5-ae5b-37c1b855a857-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:30 crc kubenswrapper[4902]: I0121 16:09:30.991859 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8tn8\" (UniqueName: \"kubernetes.io/projected/d895a439-2fd1-43e5-ae5b-37c1b855a857-kube-api-access-v8tn8\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.017123 4902 generic.go:334] "Generic (PLEG): container finished" podID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1" exitCode=0 Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.017166 4902 generic.go:334] "Generic (PLEG): container finished" podID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28" exitCode=143 Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.017279 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.020457 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerDied","Data":"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"} Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.020511 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerDied","Data":"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"} Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.020530 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d895a439-2fd1-43e5-ae5b-37c1b855a857","Type":"ContainerDied","Data":"e143ea804eb357383be65a4f9299d35f13390bc0106b3c7a48e5ed3ee751c488"} Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.020551 4902 scope.go:117] "RemoveContainer" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.024432 4902 generic.go:334] "Generic (PLEG): container finished" podID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerID="e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42" exitCode=0 Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.024461 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerDied","Data":"e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42"} Jan 21 16:09:31 crc kubenswrapper[4902]: 
I0121 16:09:31.024507 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerDied","Data":"2391defccba80f13530a209fd1975b89ba247557a50efabc95271e6deef454d1"} Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.024469 4902 generic.go:334] "Generic (PLEG): container finished" podID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerID="2391defccba80f13530a209fd1975b89ba247557a50efabc95271e6deef454d1" exitCode=143 Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.072523 4902 scope.go:117] "RemoveContainer" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.075217 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.090110 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.092540 4902 scope.go:117] "RemoveContainer" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1" Jan 21 16:09:31 crc kubenswrapper[4902]: E0121 16:09:31.093446 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": container with ID starting with ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1 not found: ID does not exist" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.093488 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"} err="failed to get container status \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": rpc error: code = NotFound desc = could not find container \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": container with ID starting with ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1 not found: ID does not exist" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.093514 4902 scope.go:117] "RemoveContainer" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28" Jan 21 16:09:31 crc kubenswrapper[4902]: E0121 16:09:31.093755 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": container with ID starting with dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28 not found: ID does not exist" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.093788 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"} err="failed to get container status \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": rpc error: code = NotFound desc = could not find container \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": container with ID starting with dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28 not found: ID does not exist" Jan 21 16:09:31 crc kubenswrapper[4902]: 
I0121 16:09:31.093807 4902 scope.go:117] "RemoveContainer" containerID="ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.094074 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1"} err="failed to get container status \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": rpc error: code = NotFound desc = could not find container \"ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1\": container with ID starting with ad6cf0fec31713dab55f3d6e76dba18af105259691298a3fbf0e37c53a9503d1 not found: ID does not exist" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.094106 4902 scope.go:117] "RemoveContainer" containerID="dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.094288 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28"} err="failed to get container status \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": rpc error: code = NotFound desc = could not find container \"dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28\": container with ID starting with dda37803457c442d035ca9fefb55f5073a87f3bafbd0f076c2f6b2c679b3fd28 not found: ID does not exist" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103131 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:31 crc kubenswrapper[4902]: E0121 16:09:31.103650 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-log" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103670 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-log" Jan 21 16:09:31 crc kubenswrapper[4902]: E0121 16:09:31.103696 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-httpd" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103705 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-httpd" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103932 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-log" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.103960 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" containerName="glance-httpd" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.107351 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.109415 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.109566 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.119116 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195097 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195178 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195235 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195269 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195314 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195343 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.195384 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296151 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296201 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296229 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296256 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296287 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296383 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.296849 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.297328 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.301393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.301906 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.316990 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.317696 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.320217 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.409999 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.479202 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499652 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499811 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499856 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499898 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499933 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.499960 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") pod \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\" (UID: \"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7\") " Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.500466 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.500621 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs" (OuterVolumeSpecName: "logs") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.506386 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5" (OuterVolumeSpecName: "kube-api-access-z89k5") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "kube-api-access-z89k5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.530377 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts" (OuterVolumeSpecName: "scripts") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.584228 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602442 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z89k5\" (UniqueName: \"kubernetes.io/projected/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-kube-api-access-z89k5\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602472 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602484 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602493 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.602500 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.652196 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data" (OuterVolumeSpecName: "config-data") pod "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" (UID: "89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:31 crc kubenswrapper[4902]: I0121 16:09:31.707564 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.035197 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7","Type":"ContainerDied","Data":"1d32a28fbd7e96a1a583185b8a4ff7c76c3be27c2af3b3ef55f445d5eb9b0c25"} Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.035247 4902 scope.go:117] "RemoveContainer" containerID="e2bfe2033a458018f9859bed675794dcb04f5f1714eb354b1bacd9ba8be6fe42" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.035499 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.063882 4902 scope.go:117] "RemoveContainer" containerID="2391defccba80f13530a209fd1975b89ba247557a50efabc95271e6deef454d1" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.068736 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.075791 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.104392 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: E0121 16:09:32.107440 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-log" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.107478 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-log" Jan 21 16:09:32 crc kubenswrapper[4902]: E0121 16:09:32.107522 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-httpd" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.107535 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-httpd" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.107877 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-log" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.107896 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" containerName="glance-httpd" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.109793 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.113073 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.113223 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.116861 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.202586 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:09:32 crc kubenswrapper[4902]: W0121 16:09:32.207325 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a90211b_865e_43ee_a4d2_4435d5377cac.slice/crio-9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb WatchSource:0}: Error finding container 9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb: Status 404 returned error can't find the container with id 9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.219359 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221275 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221443 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221629 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221745 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.221830 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.307875 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7" path="/var/lib/kubelet/pods/89e7fd5c-9bb9-4f38-98d2-9cbfb20480d7/volumes" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.308631 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d895a439-2fd1-43e5-ae5b-37c1b855a857" path="/var/lib/kubelet/pods/d895a439-2fd1-43e5-ae5b-37c1b855a857/volumes" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.323167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.323933 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324103 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324459 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324561 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324604 4902 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.324682 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.325125 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.327424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.328439 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.329895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.338340 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.341948 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") pod \"glance-default-external-api-0\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " pod="openstack/glance-default-external-api-0" Jan 21 16:09:32 crc kubenswrapper[4902]: I0121 16:09:32.468266 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:09:33 crc kubenswrapper[4902]: I0121 16:09:33.051233 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerStarted","Data":"b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c"} Jan 21 16:09:33 crc kubenswrapper[4902]: I0121 16:09:33.051752 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerStarted","Data":"9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb"} Jan 21 16:09:33 crc kubenswrapper[4902]: I0121 16:09:33.067521 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:09:33 crc kubenswrapper[4902]: W0121 16:09:33.074774 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621700c2_adff_4cf1_81a4_fb0213e5e919.slice/crio-fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349 WatchSource:0}: Error finding container fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349: Status 404 returned error can't find the container with id fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349 Jan 21 16:09:34 crc kubenswrapper[4902]: I0121 16:09:34.062923 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerStarted","Data":"d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3"} Jan 21 16:09:34 crc kubenswrapper[4902]: I0121 16:09:34.063277 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerStarted","Data":"fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349"} Jan 21 16:09:34 crc kubenswrapper[4902]: I0121 16:09:34.067475 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerStarted","Data":"59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207"} Jan 21 16:09:34 crc kubenswrapper[4902]: I0121 16:09:34.091272 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.091253739 podStartE2EDuration="3.091253739s" podCreationTimestamp="2026-01-21 16:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:34.085968021 +0000 UTC m=+5736.162801060" watchObservedRunningTime="2026-01-21 16:09:34.091253739 +0000 UTC m=+5736.168086768" Jan 21 16:09:35 crc kubenswrapper[4902]: I0121 16:09:35.078433 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerStarted","Data":"736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3"} Jan 21 16:09:35 crc kubenswrapper[4902]: I0121 16:09:35.113525 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.113504672 podStartE2EDuration="3.113504672s" podCreationTimestamp="2026-01-21 16:09:32 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:35.108737058 +0000 UTC m=+5737.185570097" watchObservedRunningTime="2026-01-21 16:09:35.113504672 +0000 UTC m=+5737.190337701" Jan 21 16:09:36 crc kubenswrapper[4902]: I0121 16:09:36.611322 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:09:36 crc kubenswrapper[4902]: I0121 16:09:36.679731 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:09:36 crc kubenswrapper[4902]: I0121 16:09:36.679980 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="dnsmasq-dns" containerID="cri-o://951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049" gracePeriod=10 Jan 21 16:09:37 crc kubenswrapper[4902]: I0121 16:09:37.103310 4902 generic.go:334] "Generic (PLEG): container finished" podID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerID="951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049" exitCode=0 Jan 21 16:09:37 crc kubenswrapper[4902]: I0121 16:09:37.103383 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerDied","Data":"951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049"} Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.419838 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.551905 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.552034 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.552092 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.552148 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.552202 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") pod \"5a487ade-04df-42df-b2a4-694f02a2ebdb\" (UID: \"5a487ade-04df-42df-b2a4-694f02a2ebdb\") " Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.562019 4902 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2" (OuterVolumeSpecName: "kube-api-access-j2vq2") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "kube-api-access-j2vq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.601725 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.602348 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.603694 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config" (OuterVolumeSpecName: "config") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.605152 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a487ade-04df-42df-b2a4-694f02a2ebdb" (UID: "5a487ade-04df-42df-b2a4-694f02a2ebdb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654404 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654445 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654462 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2vq2\" (UniqueName: \"kubernetes.io/projected/5a487ade-04df-42df-b2a4-694f02a2ebdb-kube-api-access-j2vq2\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654474 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:38 crc kubenswrapper[4902]: I0121 16:09:38.654486 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a487ade-04df-42df-b2a4-694f02a2ebdb-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.126549 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" event={"ID":"5a487ade-04df-42df-b2a4-694f02a2ebdb","Type":"ContainerDied","Data":"cce4d19f30fd69fa08a849c8261f82a05dc1b4c6705764be924dca9e7b74f41e"} Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.126618 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69884d7f9-kfzgg" Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.126652 4902 scope.go:117] "RemoveContainer" containerID="951c4e5c6873eb9e83588429fe8aca2e5cbae26eba168613139a62833929e049" Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.157885 4902 scope.go:117] "RemoveContainer" containerID="5cb68b975e1bdae1829713fed46eef25b840bf53e0813c38525f5a6f921ca76c" Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.173003 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:09:39 crc kubenswrapper[4902]: I0121 16:09:39.181447 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69884d7f9-kfzgg"] Jan 21 16:09:40 crc kubenswrapper[4902]: I0121 16:09:40.310176 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" path="/var/lib/kubelet/pods/5a487ade-04df-42df-b2a4-694f02a2ebdb/volumes" Jan 21 16:09:41 crc kubenswrapper[4902]: I0121 16:09:41.480327 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:41 crc kubenswrapper[4902]: I0121 16:09:41.480811 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:41 crc kubenswrapper[4902]: I0121 16:09:41.507782 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:41 crc kubenswrapper[4902]: I0121 16:09:41.517671 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.154280 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.154325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.468875 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.468924 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.498937 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:09:42 crc kubenswrapper[4902]: I0121 16:09:42.534898 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:09:43 crc kubenswrapper[4902]: I0121 16:09:43.166937 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:09:43 crc kubenswrapper[4902]: I0121 16:09:43.167261 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:09:44 crc kubenswrapper[4902]: I0121 16:09:44.074873 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:09:44 crc kubenswrapper[4902]: I0121 16:09:44.140326 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" 
Jan 21 16:09:45 crc kubenswrapper[4902]: I0121 16:09:45.102311 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:09:45 crc kubenswrapper[4902]: I0121 16:09:45.181040 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:09:45 crc kubenswrapper[4902]: I0121 16:09:45.242366 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:09:47 crc kubenswrapper[4902]: I0121 16:09:47.769552 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:09:47 crc kubenswrapper[4902]: I0121 16:09:47.770617 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.294236 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"] Jan 21 16:09:53 crc kubenswrapper[4902]: E0121 16:09:53.295378 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="dnsmasq-dns" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.295400 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="dnsmasq-dns" Jan 21 16:09:53 crc kubenswrapper[4902]: E0121 16:09:53.295426 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="init" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.295440 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="init" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.295775 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a487ade-04df-42df-b2a4-694f02a2ebdb" containerName="dnsmasq-dns" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.296717 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.298472 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.301636 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xrlb5"] Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.302964 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.312700 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"] Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.320241 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.320337 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mb94\" (UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.323179 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xrlb5"] Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.422066 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.422117 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.422278 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.422342 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mb94\" (UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.423586 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.441305 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mb94\" 
(UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") pod \"placement-70cb-account-create-update-hprl8\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.524006 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.524082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.524831 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.544364 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") pod \"placement-db-create-xrlb5\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.620700 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:53 crc kubenswrapper[4902]: I0121 16:09:53.626741 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:54 crc kubenswrapper[4902]: I0121 16:09:54.203485 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"] Jan 21 16:09:54 crc kubenswrapper[4902]: I0121 16:09:54.264468 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-70cb-account-create-update-hprl8" event={"ID":"311b51a9-7349-42c3-8777-e1da9c997866","Type":"ContainerStarted","Data":"173c6c71b1c511d1ce8e014ff16b6925a0603aac50b3a26bc4726fac330fcd1d"} Jan 21 16:09:54 crc kubenswrapper[4902]: I0121 16:09:54.270111 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xrlb5"] Jan 21 16:09:54 crc kubenswrapper[4902]: E0121 16:09:54.689374 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32dabaa5_86fa_4ff4_9a8e_7cd5360c978c.slice/crio-aef8011a0955408b9b496fd1dcaa48e11cb807245e77d4a67f379e75f01adc85.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.278388 4902 generic.go:334] "Generic (PLEG): container finished" podID="311b51a9-7349-42c3-8777-e1da9c997866" containerID="bfee1fd2715dd8d05c9392fd3ab86d1d97c355292e968dc34fcc4d66a846b5d3" exitCode=0 Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.278499 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-70cb-account-create-update-hprl8" event={"ID":"311b51a9-7349-42c3-8777-e1da9c997866","Type":"ContainerDied","Data":"bfee1fd2715dd8d05c9392fd3ab86d1d97c355292e968dc34fcc4d66a846b5d3"} Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.281763 4902 generic.go:334] "Generic (PLEG): container finished" podID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" containerID="aef8011a0955408b9b496fd1dcaa48e11cb807245e77d4a67f379e75f01adc85" exitCode=0 Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.281831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xrlb5" event={"ID":"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c","Type":"ContainerDied","Data":"aef8011a0955408b9b496fd1dcaa48e11cb807245e77d4a67f379e75f01adc85"} Jan 21 16:09:55 crc kubenswrapper[4902]: I0121 16:09:55.281869 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xrlb5" event={"ID":"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c","Type":"ContainerStarted","Data":"fdda20c0b81f02a878c1630192d767aabc81eb313c73c05a0e37e860b871bdc4"} Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.712915 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.722277 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.840879 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") pod \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.840965 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") pod \"311b51a9-7349-42c3-8777-e1da9c997866\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.841037 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") pod \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\" (UID: \"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c\") " Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.841254 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mb94\" (UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") pod \"311b51a9-7349-42c3-8777-e1da9c997866\" (UID: \"311b51a9-7349-42c3-8777-e1da9c997866\") " Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.841739 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" (UID: "32dabaa5-86fa-4ff4-9a8e-7cd5360c978c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.841766 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "311b51a9-7349-42c3-8777-e1da9c997866" (UID: "311b51a9-7349-42c3-8777-e1da9c997866"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.846272 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk" (OuterVolumeSpecName: "kube-api-access-fcbfk") pod "32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" (UID: "32dabaa5-86fa-4ff4-9a8e-7cd5360c978c"). InnerVolumeSpecName "kube-api-access-fcbfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.853341 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94" (OuterVolumeSpecName: "kube-api-access-7mb94") pod "311b51a9-7349-42c3-8777-e1da9c997866" (UID: "311b51a9-7349-42c3-8777-e1da9c997866"). InnerVolumeSpecName "kube-api-access-7mb94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.943452 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mb94\" (UniqueName: \"kubernetes.io/projected/311b51a9-7349-42c3-8777-e1da9c997866-kube-api-access-7mb94\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.943489 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.943501 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/311b51a9-7349-42c3-8777-e1da9c997866-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:56 crc kubenswrapper[4902]: I0121 16:09:56.943510 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcbfk\" (UniqueName: \"kubernetes.io/projected/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c-kube-api-access-fcbfk\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.302697 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-70cb-account-create-update-hprl8" event={"ID":"311b51a9-7349-42c3-8777-e1da9c997866","Type":"ContainerDied","Data":"173c6c71b1c511d1ce8e014ff16b6925a0603aac50b3a26bc4726fac330fcd1d"} Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.302746 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="173c6c71b1c511d1ce8e014ff16b6925a0603aac50b3a26bc4726fac330fcd1d" Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.302714 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-70cb-account-create-update-hprl8" Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.305116 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xrlb5" event={"ID":"32dabaa5-86fa-4ff4-9a8e-7cd5360c978c","Type":"ContainerDied","Data":"fdda20c0b81f02a878c1630192d767aabc81eb313c73c05a0e37e860b871bdc4"} Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.305164 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdda20c0b81f02a878c1630192d767aabc81eb313c73c05a0e37e860b871bdc4" Jan 21 16:09:57 crc kubenswrapper[4902]: I0121 16:09:57.305237 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xrlb5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.689143 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:09:58 crc kubenswrapper[4902]: E0121 16:09:58.689771 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311b51a9-7349-42c3-8777-e1da9c997866" containerName="mariadb-account-create-update" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.689782 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="311b51a9-7349-42c3-8777-e1da9c997866" containerName="mariadb-account-create-update" Jan 21 16:09:58 crc kubenswrapper[4902]: E0121 16:09:58.689793 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" containerName="mariadb-database-create" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.689799 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" containerName="mariadb-database-create" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.690145 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" containerName="mariadb-database-create" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.690165 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="311b51a9-7349-42c3-8777-e1da9c997866" containerName="mariadb-account-create-update" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.690735 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.692562 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.692636 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.697138 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7mj69" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.730613 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.730945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.731004 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.731208 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjkm\" (UniqueName: 
\"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.731303 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.735017 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.780850 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.783248 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.790567 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834323 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834690 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjkm\" (UniqueName: \"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834741 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5rm5\" (UniqueName: \"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834783 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834818 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.834860 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") pod 
\"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.835010 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.835100 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.835195 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.835305 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.836352 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.840988 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.841292 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.842456 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.859054 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjkm\" (UniqueName: \"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") pod \"placement-db-sync-nnsm5\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937457 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937530 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5rm5\" (UniqueName: \"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937616 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.937663 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.938638 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.938649 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.938782 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.938917 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:58 crc kubenswrapper[4902]: I0121 16:09:58.957791 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5rm5\" (UniqueName: 
\"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") pod \"dnsmasq-dns-ddb658677-chfv4\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:59 crc kubenswrapper[4902]: I0121 16:09:59.020317 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nnsm5" Jan 21 16:09:59 crc kubenswrapper[4902]: I0121 16:09:59.106919 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:09:59 crc kubenswrapper[4902]: I0121 16:09:59.457518 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:09:59 crc kubenswrapper[4902]: W0121 16:09:59.465359 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf532d2b6_7ad3_4b83_9100_d4b94d5a512d.slice/crio-fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2 WatchSource:0}: Error finding container fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2: Status 404 returned error can't find the container with id fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2 Jan 21 16:09:59 crc kubenswrapper[4902]: I0121 16:09:59.583873 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.345195 4902 generic.go:334] "Generic (PLEG): container finished" podID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerID="84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9" exitCode=0 Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.345379 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerDied","Data":"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9"} Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.345543 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerStarted","Data":"cdda0da4083f2f0e9099ddfcab1f5b7e57fb3c8539f90cf5b020d3761c23f6b0"} Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.350951 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nnsm5" event={"ID":"f532d2b6-7ad3-4b83-9100-d4b94d5a512d","Type":"ContainerStarted","Data":"03abc4558e909383d3d41af8248acf4829b9d6450d3df00a2f6958bd3e3264e7"} Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.351008 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nnsm5" event={"ID":"f532d2b6-7ad3-4b83-9100-d4b94d5a512d","Type":"ContainerStarted","Data":"fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2"} Jan 21 16:10:00 crc kubenswrapper[4902]: I0121 16:10:00.450381 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-nnsm5" podStartSLOduration=2.450362603 podStartE2EDuration="2.450362603s" podCreationTimestamp="2026-01-21 16:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:00.445126896 +0000 UTC m=+5762.521959925" watchObservedRunningTime="2026-01-21 16:10:00.450362603 +0000 UTC m=+5762.527195632" Jan 21 16:10:01 crc kubenswrapper[4902]: 
I0121 16:10:01.362713 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerStarted","Data":"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7"} Jan 21 16:10:01 crc kubenswrapper[4902]: I0121 16:10:01.363033 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:10:01 crc kubenswrapper[4902]: I0121 16:10:01.365438 4902 generic.go:334] "Generic (PLEG): container finished" podID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" containerID="03abc4558e909383d3d41af8248acf4829b9d6450d3df00a2f6958bd3e3264e7" exitCode=0 Jan 21 16:10:01 crc kubenswrapper[4902]: I0121 16:10:01.365487 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nnsm5" event={"ID":"f532d2b6-7ad3-4b83-9100-d4b94d5a512d","Type":"ContainerDied","Data":"03abc4558e909383d3d41af8248acf4829b9d6450d3df00a2f6958bd3e3264e7"} Jan 21 16:10:01 crc kubenswrapper[4902]: I0121 16:10:01.387813 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-ddb658677-chfv4" podStartSLOduration=3.387790122 podStartE2EDuration="3.387790122s" podCreationTimestamp="2026-01-21 16:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:01.383393648 +0000 UTC m=+5763.460226677" watchObservedRunningTime="2026-01-21 16:10:01.387790122 +0000 UTC m=+5763.464623151" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.723847 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nnsm5" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.915755 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.915814 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.915857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.915945 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.916075 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbjkm\" (UniqueName: \"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") pod \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\" (UID: \"f532d2b6-7ad3-4b83-9100-d4b94d5a512d\") " Jan 21 16:10:02 crc kubenswrapper[4902]: 
I0121 16:10:02.916623 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs" (OuterVolumeSpecName: "logs") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.923473 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts" (OuterVolumeSpecName: "scripts") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.926216 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm" (OuterVolumeSpecName: "kube-api-access-kbjkm") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "kube-api-access-kbjkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.942221 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:02 crc kubenswrapper[4902]: I0121 16:10:02.942691 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data" (OuterVolumeSpecName: "config-data") pod "f532d2b6-7ad3-4b83-9100-d4b94d5a512d" (UID: "f532d2b6-7ad3-4b83-9100-d4b94d5a512d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.017935 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbjkm\" (UniqueName: \"kubernetes.io/projected/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-kube-api-access-kbjkm\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.017973 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.017983 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.017993 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.018001 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f532d2b6-7ad3-4b83-9100-d4b94d5a512d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.384611 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-nnsm5" event={"ID":"f532d2b6-7ad3-4b83-9100-d4b94d5a512d","Type":"ContainerDied","Data":"fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2"} Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.384910 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe24954b37cc941c89ad2d9dcf99daa5617245da439afaa7d0f72c86750a77a2" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.385023 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-nnsm5" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.864759 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-856775b9dd-twjxc"] Jan 21 16:10:03 crc kubenswrapper[4902]: E0121 16:10:03.865403 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" containerName="placement-db-sync" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.865426 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" containerName="placement-db-sync" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.865730 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" containerName="placement-db-sync" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.867035 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.870490 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.870527 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.870579 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.870977 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7mj69" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.871136 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 16:10:03 crc kubenswrapper[4902]: I0121 16:10:03.880540 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-856775b9dd-twjxc"] Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034428 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-logs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034487 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwz4\" (UniqueName: \"kubernetes.io/projected/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-kube-api-access-xtwz4\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-internal-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034540 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-public-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.034576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-combined-ca-bundle\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.035242 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-scripts\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.035306 
4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-config-data\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.136661 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwz4\" (UniqueName: \"kubernetes.io/projected/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-kube-api-access-xtwz4\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.136720 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-internal-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.136746 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-public-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137314 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-combined-ca-bundle\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137408 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-scripts\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137459 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-config-data\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137565 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-logs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.137986 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-logs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.140623 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-public-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.140633 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-internal-tls-certs\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.140984 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-config-data\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.142291 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-scripts\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.147739 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-combined-ca-bundle\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.160571 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtwz4\" (UniqueName: \"kubernetes.io/projected/43a8c70b-ebc7-4ce0-8d5c-e790226eff45-kube-api-access-xtwz4\") pod \"placement-856775b9dd-twjxc\" (UID: \"43a8c70b-ebc7-4ce0-8d5c-e790226eff45\") " pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.203167 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:04 crc kubenswrapper[4902]: I0121 16:10:04.629484 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-856775b9dd-twjxc"] Jan 21 16:10:04 crc kubenswrapper[4902]: W0121 16:10:04.636915 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a8c70b_ebc7_4ce0_8d5c_e790226eff45.slice/crio-a363c3df22484b95080493226ef6de548bce7b37077a62ea634a957ad031d675 WatchSource:0}: Error finding container a363c3df22484b95080493226ef6de548bce7b37077a62ea634a957ad031d675: Status 404 returned error can't find the container with id a363c3df22484b95080493226ef6de548bce7b37077a62ea634a957ad031d675 Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-856775b9dd-twjxc" event={"ID":"43a8c70b-ebc7-4ce0-8d5c-e790226eff45","Type":"ContainerStarted","Data":"67794b6a2b1e5568faa321d86fa25828738739aa68ab11d7c7db8061fb2e5729"} Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405707 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-856775b9dd-twjxc" event={"ID":"43a8c70b-ebc7-4ce0-8d5c-e790226eff45","Type":"ContainerStarted","Data":"abf7fda4412b62127e45777e86533d8cca3f8c8b810b11811bd51c3975da6d2c"} Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405722 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405732 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-856775b9dd-twjxc" event={"ID":"43a8c70b-ebc7-4ce0-8d5c-e790226eff45","Type":"ContainerStarted","Data":"a363c3df22484b95080493226ef6de548bce7b37077a62ea634a957ad031d675"} Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.405746 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:05 crc kubenswrapper[4902]: I0121 16:10:05.429748 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-856775b9dd-twjxc" podStartSLOduration=2.42972895 podStartE2EDuration="2.42972895s" podCreationTimestamp="2026-01-21 16:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:05.423411193 +0000 UTC m=+5767.500244252" watchObservedRunningTime="2026-01-21 16:10:05.42972895 +0000 UTC m=+5767.506561969" Jan 21 16:10:09 crc kubenswrapper[4902]: I0121 16:10:09.109340 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:10:09 crc kubenswrapper[4902]: I0121 16:10:09.204115 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:10:09 crc kubenswrapper[4902]: I0121 16:10:09.204386 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="dnsmasq-dns" containerID="cri-o://e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" gracePeriod=10 Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.200898 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.351666 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.351724 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.351778 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2cj4\" (UniqueName: \"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.351822 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.353643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") pod \"f87b7e66-2e90-42f0-babb-fc5013fa6077\" (UID: \"f87b7e66-2e90-42f0-babb-fc5013fa6077\") " Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.374259 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4" (OuterVolumeSpecName: "kube-api-access-s2cj4") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "kube-api-access-s2cj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.396727 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config" (OuterVolumeSpecName: "config") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.397621 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.412184 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.414667 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f87b7e66-2e90-42f0-babb-fc5013fa6077" (UID: "f87b7e66-2e90-42f0-babb-fc5013fa6077"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.457779 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.458196 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.458321 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2cj4\" (UniqueName: \"kubernetes.io/projected/f87b7e66-2e90-42f0-babb-fc5013fa6077-kube-api-access-s2cj4\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.458416 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.458543 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f87b7e66-2e90-42f0-babb-fc5013fa6077-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460159 4902 generic.go:334] "Generic (PLEG): container finished" podID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerID="e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" exitCode=0 Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460266 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerDied","Data":"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7"} Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460353 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" event={"ID":"f87b7e66-2e90-42f0-babb-fc5013fa6077","Type":"ContainerDied","Data":"00e6bf2e91ac88d92bb8b5aa081d62fc62c975ee1e0acebbcdf006224895188a"} Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460418 4902 scope.go:117] "RemoveContainer" containerID="e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.460577 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bdc9ddbfc-5j79v" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.535429 4902 scope.go:117] "RemoveContainer" containerID="9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.537835 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.546807 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bdc9ddbfc-5j79v"] Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.564092 4902 scope.go:117] "RemoveContainer" containerID="e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" Jan 21 16:10:10 crc kubenswrapper[4902]: E0121 16:10:10.564640 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7\": container with ID starting with e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7 not found: ID does not exist" containerID="e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.564677 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7"} err="failed to get container status \"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7\": rpc error: code = NotFound desc = could not find container \"e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7\": container with ID starting with e995f55c1cd1b4d2f0e5fe94aa001608f020f2b779fc66c3e4759675ea52e4e7 not found: ID does not exist" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.564705 4902 scope.go:117] "RemoveContainer" containerID="9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545" Jan 21 16:10:10 crc kubenswrapper[4902]: E0121 16:10:10.564969 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545\": container with ID starting with 9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545 not found: ID does not exist" containerID="9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545" Jan 21 16:10:10 crc kubenswrapper[4902]: I0121 16:10:10.564993 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545"} err="failed to get container status \"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545\": rpc error: code = NotFound desc = could not find container \"9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545\": container with ID starting with 9b90b5e9976ff404939d97898e5d6de981b873dcf40445c1eff2bc3716325545 not found: ID does not exist" Jan 21 16:10:12 crc kubenswrapper[4902]: I0121 16:10:12.310718 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" path="/var/lib/kubelet/pods/f87b7e66-2e90-42f0-babb-fc5013fa6077/volumes" Jan 21 16:10:17 crc kubenswrapper[4902]: I0121 16:10:17.770425 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:10:17 crc kubenswrapper[4902]: I0121 16:10:17.771245 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.678618 4902 scope.go:117] "RemoveContainer" containerID="311b61cd815d9e9e4c95e8d3428eb904438d2e7a6efb54993e589d294d8780c4" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.717602 4902 scope.go:117] "RemoveContainer" containerID="401f56f07810074a750a97f4da0d7c60e93e7a8c193e6d8365b52546dfbecc13" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.763927 4902 scope.go:117] "RemoveContainer" containerID="c548aa5ba6d350e77b6beec3d64af186cf452dd8633be8614338761c7800ca06" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.787510 4902 scope.go:117] "RemoveContainer" containerID="d712f1b4cdf6346532f6de92bb64a6956b68ba70087482d2c995c46acdeba1e0" Jan 21 16:10:24 crc kubenswrapper[4902]: I0121 16:10:24.830908 4902 scope.go:117] "RemoveContainer" containerID="aba9f698bb3c03d4e31ec5eca5323d9be2568c046cc99860cd7803581de5e34e" Jan 21 16:10:35 crc kubenswrapper[4902]: I0121 16:10:35.628661 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:35 crc kubenswrapper[4902]: I0121 16:10:35.630586 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-856775b9dd-twjxc" Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.771328 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.771903 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.771957 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.773277 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:10:47 crc kubenswrapper[4902]: I0121 16:10:47.773342 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b" gracePeriod=600 Jan 21 16:10:48 crc 
kubenswrapper[4902]: I0121 16:10:48.843565 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b" exitCode=0 Jan 21 16:10:48 crc kubenswrapper[4902]: I0121 16:10:48.844111 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b"} Jan 21 16:10:48 crc kubenswrapper[4902]: I0121 16:10:48.844145 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"} Jan 21 16:10:48 crc kubenswrapper[4902]: I0121 16:10:48.844166 4902 scope.go:117] "RemoveContainer" containerID="285a72291cecfe5325de527c229d6d43b986b29583f243c6083f83854e38ab6e" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.290627 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8csjv"] Jan 21 16:10:56 crc kubenswrapper[4902]: E0121 16:10:56.291575 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="init" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.291594 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="init" Jan 21 16:10:56 crc kubenswrapper[4902]: E0121 16:10:56.291611 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="dnsmasq-dns" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.291619 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="dnsmasq-dns" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.291850 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87b7e66-2e90-42f0-babb-fc5013fa6077" containerName="dnsmasq-dns" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.292597 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.305929 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8csjv"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.380851 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-974x9"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.382241 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.391328 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-974x9"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.392275 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.392537 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.491006 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.492099 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.493998 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.494736 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.494866 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.494943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.494984 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.495653 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " 
pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.512672 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.521526 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") pod \"nova-api-db-create-8csjv\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.595328 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596428 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596482 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596527 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.596734 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.597394 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.607118 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.633777 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.636690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") pod \"nova-cell0-db-create-974x9\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.695634 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.696272 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697223 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697464 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697506 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697548 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.697591 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.698517 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.700374 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.707564 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.715728 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") pod \"nova-api-3bb8-account-create-update-k967z\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.799752 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.800088 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.800119 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.800174 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.801147 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.815846 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.827065 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") pod \"nova-cell1-db-create-9kql9\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.901537 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.901597 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.903036 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.905649 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.907198 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.910058 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.918454 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") pod \"nova-cell0-4fdb-account-create-update-4c46m\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.927578 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:10:56 crc kubenswrapper[4902]: I0121 16:10:56.994016 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.005592 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.005720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.107743 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.107835 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.109173 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.141820 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.161272 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") pod \"nova-cell1-6d31-account-create-update-v52m2\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.181202 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8csjv"] Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.230212 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.268308 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-974x9"] Jan 21 16:10:57 crc kubenswrapper[4902]: W0121 16:10:57.276249 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbe639d2_1844_47b8_b4c8_3b602547070a.slice/crio-7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6 WatchSource:0}: Error finding container 7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6: Status 404 returned error can't find the container with id 7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6 Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.307476 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"] Jan 21 16:10:57 crc kubenswrapper[4902]: W0121 16:10:57.314933 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c847ba2_4e65_4677_b8b6_514162b0c1bc.slice/crio-cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969 WatchSource:0}: Error finding container cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969: Status 404 returned error can't find the container with id cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969 Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.491196 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.603519 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.779657 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.939376 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kql9" event={"ID":"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172","Type":"ContainerStarted","Data":"99ec12ef319b65df6402a210f267dd882457bc1525334bf5d8dcc815a06bbc60"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.942945 4902 generic.go:334] "Generic (PLEG): container finished" podID="fbe639d2-1844-47b8-b4c8-3b602547070a" containerID="99ee9f7749f725c9768c807df30815b54542175e3f04ac09d8600799af1e8a19" exitCode=0 Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.943014 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-974x9" event={"ID":"fbe639d2-1844-47b8-b4c8-3b602547070a","Type":"ContainerDied","Data":"99ee9f7749f725c9768c807df30815b54542175e3f04ac09d8600799af1e8a19"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.943097 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-974x9" event={"ID":"fbe639d2-1844-47b8-b4c8-3b602547070a","Type":"ContainerStarted","Data":"7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.944256 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" event={"ID":"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58","Type":"ContainerStarted","Data":"98651171c7b939f6328de68d0a540446143d9ad43bf62668613678d3ae0d8135"} Jan 21 
16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.946106 4902 generic.go:334] "Generic (PLEG): container finished" podID="5963807a-fc48-485b-a3a5-7b07791dfdd0" containerID="a9669cf760ec41fe8c9ac56172de1dfc2733858ea7763d6ffbfc15c535c182ce" exitCode=0 Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.946161 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8csjv" event={"ID":"5963807a-fc48-485b-a3a5-7b07791dfdd0","Type":"ContainerDied","Data":"a9669cf760ec41fe8c9ac56172de1dfc2733858ea7763d6ffbfc15c535c182ce"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.946181 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8csjv" event={"ID":"5963807a-fc48-485b-a3a5-7b07791dfdd0","Type":"ContainerStarted","Data":"07fcf8f0edbd4f84bc91164ded641268ec9af1fe660812bf6c9ef74d84b50a42"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.948712 4902 generic.go:334] "Generic (PLEG): container finished" podID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" containerID="f7c278e1da3c54353778da6f63a10b5d381146af280b9714be7ae6c71d2e3772" exitCode=0 Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.948771 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3bb8-account-create-update-k967z" event={"ID":"9c847ba2-4e65-4677-b8b6-514162b0c1bc","Type":"ContainerDied","Data":"f7c278e1da3c54353778da6f63a10b5d381146af280b9714be7ae6c71d2e3772"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.948800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3bb8-account-create-update-k967z" event={"ID":"9c847ba2-4e65-4677-b8b6-514162b0c1bc","Type":"ContainerStarted","Data":"cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969"} Jan 21 16:10:57 crc kubenswrapper[4902]: I0121 16:10:57.950070 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" event={"ID":"e4f58498-29bd-47d8-8af1-ac98b4a9f510","Type":"ContainerStarted","Data":"fb4a64e2025c12ae8adaca5cf9a94c80a4cbadedb759b77ea3de8530b331e28a"} Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.968692 4902 generic.go:334] "Generic (PLEG): container finished" podID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" containerID="1b0ff0cc281058854299a37c0eae467595b367d385ca015e5d0368dda142849e" exitCode=0 Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.968907 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" event={"ID":"e4f58498-29bd-47d8-8af1-ac98b4a9f510","Type":"ContainerDied","Data":"1b0ff0cc281058854299a37c0eae467595b367d385ca015e5d0368dda142849e"} Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.972457 4902 generic.go:334] "Generic (PLEG): container finished" podID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" containerID="e2e258a3a1605851e7cb0ee36afe37bb54f98c9526d53b997a37f6c2cacd6192" exitCode=0 Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.972558 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kql9" event={"ID":"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172","Type":"ContainerDied","Data":"e2e258a3a1605851e7cb0ee36afe37bb54f98c9526d53b997a37f6c2cacd6192"} Jan 21 16:10:58 crc kubenswrapper[4902]: I0121 16:10:58.976767 4902 generic.go:334] "Generic (PLEG): container finished" podID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" containerID="9e04cfcc3e9b81819b9ca08bf91b4f4038827b55094f93cb2cd3586ac9a3d537" exitCode=0 Jan 21 16:10:58 crc kubenswrapper[4902]: 
I0121 16:10:58.977015 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" event={"ID":"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58","Type":"ContainerDied","Data":"9e04cfcc3e9b81819b9ca08bf91b4f4038827b55094f93cb2cd3586ac9a3d537"} Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.379944 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.459431 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.471431 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556119 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") pod \"5963807a-fc48-485b-a3a5-7b07791dfdd0\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556274 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") pod \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") pod \"fbe639d2-1844-47b8-b4c8-3b602547070a\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556363 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") pod \"5963807a-fc48-485b-a3a5-7b07791dfdd0\" (UID: \"5963807a-fc48-485b-a3a5-7b07791dfdd0\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556732 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") pod \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\" (UID: \"9c847ba2-4e65-4677-b8b6-514162b0c1bc\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556793 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") pod \"fbe639d2-1844-47b8-b4c8-3b602547070a\" (UID: \"fbe639d2-1844-47b8-b4c8-3b602547070a\") " Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556835 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5963807a-fc48-485b-a3a5-7b07791dfdd0" (UID: "5963807a-fc48-485b-a3a5-7b07791dfdd0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.556991 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c847ba2-4e65-4677-b8b6-514162b0c1bc" (UID: "9c847ba2-4e65-4677-b8b6-514162b0c1bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.557522 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fbe639d2-1844-47b8-b4c8-3b602547070a" (UID: "fbe639d2-1844-47b8-b4c8-3b602547070a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.557645 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c847ba2-4e65-4677-b8b6-514162b0c1bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.557684 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5963807a-fc48-485b-a3a5-7b07791dfdd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.557693 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe639d2-1844-47b8-b4c8-3b602547070a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.561953 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht" (OuterVolumeSpecName: "kube-api-access-d4lht") pod "5963807a-fc48-485b-a3a5-7b07791dfdd0" (UID: "5963807a-fc48-485b-a3a5-7b07791dfdd0"). InnerVolumeSpecName "kube-api-access-d4lht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.563559 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5" (OuterVolumeSpecName: "kube-api-access-2bnq5") pod "9c847ba2-4e65-4677-b8b6-514162b0c1bc" (UID: "9c847ba2-4e65-4677-b8b6-514162b0c1bc"). InnerVolumeSpecName "kube-api-access-2bnq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.571839 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd" (OuterVolumeSpecName: "kube-api-access-26phd") pod "fbe639d2-1844-47b8-b4c8-3b602547070a" (UID: "fbe639d2-1844-47b8-b4c8-3b602547070a"). InnerVolumeSpecName "kube-api-access-26phd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.659608 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bnq5\" (UniqueName: \"kubernetes.io/projected/9c847ba2-4e65-4677-b8b6-514162b0c1bc-kube-api-access-2bnq5\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.659824 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lht\" (UniqueName: \"kubernetes.io/projected/5963807a-fc48-485b-a3a5-7b07791dfdd0-kube-api-access-d4lht\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.659910 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26phd\" (UniqueName: \"kubernetes.io/projected/fbe639d2-1844-47b8-b4c8-3b602547070a-kube-api-access-26phd\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.987862 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-974x9" event={"ID":"fbe639d2-1844-47b8-b4c8-3b602547070a","Type":"ContainerDied","Data":"7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6"} Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.988249 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f37d779c700826fe60bdfa1a4ef8f061e8d1556831ebeec2ca1fc724c5892c6" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.988315 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-974x9" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.996566 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8csjv" event={"ID":"5963807a-fc48-485b-a3a5-7b07791dfdd0","Type":"ContainerDied","Data":"07fcf8f0edbd4f84bc91164ded641268ec9af1fe660812bf6c9ef74d84b50a42"} Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.996605 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07fcf8f0edbd4f84bc91164ded641268ec9af1fe660812bf6c9ef74d84b50a42" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.996609 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8csjv" Jan 21 16:10:59 crc kubenswrapper[4902]: I0121 16:10:59.999364 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3bb8-account-create-update-k967z" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.000160 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3bb8-account-create-update-k967z" event={"ID":"9c847ba2-4e65-4677-b8b6-514162b0c1bc","Type":"ContainerDied","Data":"cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969"} Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.000191 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfa1f35b10366d8ca884c346ca0693b8589dcc18b17aa9639bcf675816b48969" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.497984 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.510766 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.517981 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678596 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") pod \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678706 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") pod \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678782 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") pod \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678825 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") pod \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\" (UID: \"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678865 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") pod \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\" (UID: \"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.678882 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") pod \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\" (UID: \"e4f58498-29bd-47d8-8af1-ac98b4a9f510\") " Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.679252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" (UID: "bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.679680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" (UID: "ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.679940 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.679964 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.680495 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4f58498-29bd-47d8-8af1-ac98b4a9f510" (UID: "e4f58498-29bd-47d8-8af1-ac98b4a9f510"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.684139 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4" (OuterVolumeSpecName: "kube-api-access-f6kv4") pod "ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" (UID: "ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172"). InnerVolumeSpecName "kube-api-access-f6kv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.691900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs" (OuterVolumeSpecName: "kube-api-access-jlrcs") pod "bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" (UID: "bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58"). InnerVolumeSpecName "kube-api-access-jlrcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.692354 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9" (OuterVolumeSpecName: "kube-api-access-v7hr9") pod "e4f58498-29bd-47d8-8af1-ac98b4a9f510" (UID: "e4f58498-29bd-47d8-8af1-ac98b4a9f510"). InnerVolumeSpecName "kube-api-access-v7hr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.781945 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlrcs\" (UniqueName: \"kubernetes.io/projected/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58-kube-api-access-jlrcs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.782318 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4f58498-29bd-47d8-8af1-ac98b4a9f510-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.782464 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7hr9\" (UniqueName: \"kubernetes.io/projected/e4f58498-29bd-47d8-8af1-ac98b4a9f510-kube-api-access-v7hr9\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:00 crc kubenswrapper[4902]: I0121 16:11:00.782607 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6kv4\" (UniqueName: \"kubernetes.io/projected/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172-kube-api-access-f6kv4\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.012512 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.012537 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6d31-account-create-update-v52m2" event={"ID":"e4f58498-29bd-47d8-8af1-ac98b4a9f510","Type":"ContainerDied","Data":"fb4a64e2025c12ae8adaca5cf9a94c80a4cbadedb759b77ea3de8530b331e28a"} Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.013651 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb4a64e2025c12ae8adaca5cf9a94c80a4cbadedb759b77ea3de8530b331e28a" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.015709 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9kql9" event={"ID":"ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172","Type":"ContainerDied","Data":"99ec12ef319b65df6402a210f267dd882457bc1525334bf5d8dcc815a06bbc60"} Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.015768 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ec12ef319b65df6402a210f267dd882457bc1525334bf5d8dcc815a06bbc60" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.015889 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9kql9" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.018391 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" event={"ID":"bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58","Type":"ContainerDied","Data":"98651171c7b939f6328de68d0a540446143d9ad43bf62668613678d3ae0d8135"} Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.018433 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98651171c7b939f6328de68d0a540446143d9ad43bf62668613678d3ae0d8135" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.018520 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4fdb-account-create-update-4c46m" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964258 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964735 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5963807a-fc48-485b-a3a5-7b07791dfdd0" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964755 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5963807a-fc48-485b-a3a5-7b07791dfdd0" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964784 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964793 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964805 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964814 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964835 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964844 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964863 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964872 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: E0121 16:11:01.964889 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe639d2-1844-47b8-b4c8-3b602547070a" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.964898 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe639d2-1844-47b8-b4c8-3b602547070a" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965125 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965148 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe639d2-1844-47b8-b4c8-3b602547070a" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965169 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965192 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" containerName="mariadb-account-create-update" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965205 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965219 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5963807a-fc48-485b-a3a5-7b07791dfdd0" containerName="mariadb-database-create" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.965945 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.968170 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-frtf7" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.968228 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.968373 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 16:11:01 crc kubenswrapper[4902]: I0121 16:11:01.974791 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.105935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.106009 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.106194 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.106222 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.207881 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.207931 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.208060 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.208103 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.212780 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.212885 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.213771 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.238180 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") pod \"nova-cell0-conductor-db-sync-zr8nj\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.282339 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:02 crc kubenswrapper[4902]: I0121 16:11:02.757460 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:11:03 crc kubenswrapper[4902]: I0121 16:11:03.034291 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" event={"ID":"76e6442c-e6fd-498e-b20d-e994574644ea","Type":"ContainerStarted","Data":"a6ae0e388b560d80c76d474d9e559dfd6b82ed31121bdddca1d8b03c2f3ee0f8"} Jan 21 16:11:04 crc kubenswrapper[4902]: I0121 16:11:04.043072 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" event={"ID":"76e6442c-e6fd-498e-b20d-e994574644ea","Type":"ContainerStarted","Data":"432f7ea37f3132bc52dfdced9ef97fb63c40a136694ea136586f2dee4c4a42b9"} Jan 21 16:11:04 crc kubenswrapper[4902]: I0121 16:11:04.065420 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" podStartSLOduration=3.065398427 podStartE2EDuration="3.065398427s" podCreationTimestamp="2026-01-21 16:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:04.060196091 +0000 UTC m=+5826.137029140" watchObservedRunningTime="2026-01-21 16:11:04.065398427 +0000 UTC m=+5826.142231456" Jan 21 16:11:09 crc kubenswrapper[4902]: I0121 16:11:09.092372 4902 generic.go:334] "Generic (PLEG): container finished" podID="76e6442c-e6fd-498e-b20d-e994574644ea" containerID="432f7ea37f3132bc52dfdced9ef97fb63c40a136694ea136586f2dee4c4a42b9" exitCode=0 Jan 21 16:11:09 crc kubenswrapper[4902]: I0121 16:11:09.092472 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" event={"ID":"76e6442c-e6fd-498e-b20d-e994574644ea","Type":"ContainerDied","Data":"432f7ea37f3132bc52dfdced9ef97fb63c40a136694ea136586f2dee4c4a42b9"} Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.468488 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.576834 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") pod \"76e6442c-e6fd-498e-b20d-e994574644ea\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.577069 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") pod \"76e6442c-e6fd-498e-b20d-e994574644ea\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.578312 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") pod \"76e6442c-e6fd-498e-b20d-e994574644ea\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.578864 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") pod \"76e6442c-e6fd-498e-b20d-e994574644ea\" (UID: \"76e6442c-e6fd-498e-b20d-e994574644ea\") " Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.583134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf" (OuterVolumeSpecName: "kube-api-access-gm9wf") pod "76e6442c-e6fd-498e-b20d-e994574644ea" (UID: "76e6442c-e6fd-498e-b20d-e994574644ea"). InnerVolumeSpecName "kube-api-access-gm9wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.585617 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts" (OuterVolumeSpecName: "scripts") pod "76e6442c-e6fd-498e-b20d-e994574644ea" (UID: "76e6442c-e6fd-498e-b20d-e994574644ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.604306 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76e6442c-e6fd-498e-b20d-e994574644ea" (UID: "76e6442c-e6fd-498e-b20d-e994574644ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.612091 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data" (OuterVolumeSpecName: "config-data") pod "76e6442c-e6fd-498e-b20d-e994574644ea" (UID: "76e6442c-e6fd-498e-b20d-e994574644ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.682635 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.682666 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm9wf\" (UniqueName: \"kubernetes.io/projected/76e6442c-e6fd-498e-b20d-e994574644ea-kube-api-access-gm9wf\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.682679 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:10 crc kubenswrapper[4902]: I0121 16:11:10.682687 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e6442c-e6fd-498e-b20d-e994574644ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.115877 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" event={"ID":"76e6442c-e6fd-498e-b20d-e994574644ea","Type":"ContainerDied","Data":"a6ae0e388b560d80c76d474d9e559dfd6b82ed31121bdddca1d8b03c2f3ee0f8"} Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.115936 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zr8nj" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.115943 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6ae0e388b560d80c76d474d9e559dfd6b82ed31121bdddca1d8b03c2f3ee0f8" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.216219 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:11:11 crc kubenswrapper[4902]: E0121 16:11:11.216612 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e6442c-e6fd-498e-b20d-e994574644ea" containerName="nova-cell0-conductor-db-sync" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.216631 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e6442c-e6fd-498e-b20d-e994574644ea" containerName="nova-cell0-conductor-db-sync" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.216818 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e6442c-e6fd-498e-b20d-e994574644ea" containerName="nova-cell0-conductor-db-sync" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.217446 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.222571 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.222835 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-frtf7" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.226431 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.295660 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.295740 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.295886 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shn78\" (UniqueName: \"kubernetes.io/projected/61fa221c-a236-471b-a3ca-0efc339d0fcc-kube-api-access-shn78\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.397463 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.397891 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.398449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shn78\" (UniqueName: \"kubernetes.io/projected/61fa221c-a236-471b-a3ca-0efc339d0fcc-kube-api-access-shn78\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.401292 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.412668 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61fa221c-a236-471b-a3ca-0efc339d0fcc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.415250 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shn78\" (UniqueName: \"kubernetes.io/projected/61fa221c-a236-471b-a3ca-0efc339d0fcc-kube-api-access-shn78\") pod \"nova-cell0-conductor-0\" (UID: \"61fa221c-a236-471b-a3ca-0efc339d0fcc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.536905 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:11 crc kubenswrapper[4902]: I0121 16:11:11.967519 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:11:11 crc kubenswrapper[4902]: W0121 16:11:11.967989 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61fa221c_a236_471b_a3ca_0efc339d0fcc.slice/crio-04bd2cb2d11ee246f2fb6729138c92051305552a6c5ec2e1178872bba7d94017 WatchSource:0}: Error finding container 04bd2cb2d11ee246f2fb6729138c92051305552a6c5ec2e1178872bba7d94017: Status 404 returned error can't find the container with id 04bd2cb2d11ee246f2fb6729138c92051305552a6c5ec2e1178872bba7d94017 Jan 21 16:11:12 crc kubenswrapper[4902]: I0121 16:11:12.126033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"61fa221c-a236-471b-a3ca-0efc339d0fcc","Type":"ContainerStarted","Data":"04bd2cb2d11ee246f2fb6729138c92051305552a6c5ec2e1178872bba7d94017"} Jan 21 16:11:13 crc kubenswrapper[4902]: I0121 16:11:13.138036 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"61fa221c-a236-471b-a3ca-0efc339d0fcc","Type":"ContainerStarted","Data":"10ad884c092a8180f4ad83a6625db58b7cfd28f41342f72b2c36f4abf6c61ace"} Jan 21 16:11:13 crc kubenswrapper[4902]: I0121 16:11:13.138391 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:13 crc kubenswrapper[4902]: I0121 16:11:13.170447 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.170424265 podStartE2EDuration="2.170424265s" podCreationTimestamp="2026-01-21 16:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:13.158228632 +0000 UTC m=+5835.235061691" watchObservedRunningTime="2026-01-21 16:11:13.170424265 +0000 UTC m=+5835.247257304" Jan 21 16:11:21 crc kubenswrapper[4902]: I0121 16:11:21.572225 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.005381 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.006686 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.009557 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.009580 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.021687 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.129080 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.129167 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.129260 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.129298 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.178658 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.180268 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.183506 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.190573 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.192076 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.197593 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.204646 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.217665 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.231625 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.231700 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.231819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.231873 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.241930 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.245406 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.251740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.258062 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.259859 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.264698 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.276017 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") pod \"nova-cell0-cell-mapping-7ld7m\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.289253 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333420 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333479 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333508 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333541 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333565 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333617 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333654 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333686 4902 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333715 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.333980 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.335465 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.384596 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.386075 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.394684 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.418293 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435606 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435668 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435696 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435734 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435759 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435811 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435853 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435900 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.435932 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.436123 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.436733 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.444097 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.446977 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.455514 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.456527 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.462784 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.469250 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.473802 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") pod \"nova-api-0\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.475455 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") pod \"nova-cell1-novncproxy-0\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.476325 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") pod \"nova-scheduler-0\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.486803 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.488478 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.504548 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.515529 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.537214 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538388 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538468 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538616 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538656 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538730 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538756 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.538982 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwjh7\" (UniqueName: 
\"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641176 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641233 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641286 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641325 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwjh7\" (UniqueName: \"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641363 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641420 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641513 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.641574 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " 
pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.642889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.644392 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.644581 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.645151 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.645419 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.653783 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.653915 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.662486 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") pod \"nova-metadata-0\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.667321 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwjh7\" (UniqueName: \"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") pod \"dnsmasq-dns-66f49c7d99-gbqjj\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.694400 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.864315 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.881176 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:22 crc kubenswrapper[4902]: I0121 16:11:22.941363 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:11:22 crc kubenswrapper[4902]: W0121 16:11:22.952303 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeebb97d_c56a_4c7d_8ec0_f9982f9c2e32.slice/crio-aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724 WatchSource:0}: Error finding container aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724: Status 404 returned error can't find the container with id aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724 Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.106014 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.123760 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.204452 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.212444 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.221263 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.221644 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.237283 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.250178 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.254994 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.255130 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.255169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.255249 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.297198 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7ld7m" event={"ID":"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32","Type":"ContainerStarted","Data":"7e054620420f286eb319ea74bdca60ca0a6e43b9d52a5c4ad7043b88a7a02929"} Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.297424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7ld7m" event={"ID":"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32","Type":"ContainerStarted","Data":"aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724"} Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.304388 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57ebee9b-653a-4d49-9002-23c81b622b7c","Type":"ContainerStarted","Data":"9ebe5c00f1a81b515c7ecc716c300b5811813ab974f7f4bd90b9fc00489cfc97"} Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.306085 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11b80ea3-f5a8-48c8-ba60-d26265f71a6b","Type":"ContainerStarted","Data":"0ac3900103983dac93de1f7fd1bd8a8ee9bd704671df4483ae644f05d4a22117"} Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.319119 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7ld7m" podStartSLOduration=2.319098496 podStartE2EDuration="2.319098496s" podCreationTimestamp="2026-01-21 16:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:23.309853206 +0000 UTC m=+5845.386686235" watchObservedRunningTime="2026-01-21 16:11:23.319098496 +0000 UTC m=+5845.395931525" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.356538 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.356797 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.357030 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: 
\"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.357213 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.362280 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.365559 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.366254 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.379049 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") pod \"nova-cell1-conductor-db-sync-mqjfk\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.418783 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.497863 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:23 crc kubenswrapper[4902]: I0121 16:11:23.526786 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:24 crc kubenswrapper[4902]: W0121 16:11:24.120646 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fafbdf5_1100_4f6f_831e_c7dd0fc63586.slice/crio-a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19 WatchSource:0}: Error finding container a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19: Status 404 returned error can't find the container with id a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19 Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.124880 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.320366 4902 generic.go:334] "Generic (PLEG): container finished" podID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerID="18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4" exitCode=0 Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.320454 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerDied","Data":"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.320481 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerStarted","Data":"3e67c0e6a35688e67c9ae97a666562e7a06fee4e64265deb351ad6f1c7a1f81e"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.367366 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerStarted","Data":"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.367423 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerStarted","Data":"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.367435 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerStarted","Data":"4b639bfa91be42f81663a77a5e76c1832f5e50df04be72677151e02c7b0de405"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.376926 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" event={"ID":"6fafbdf5-1100-4f6f-831e-c7dd0fc63586","Type":"ContainerStarted","Data":"a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.380480 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57ebee9b-653a-4d49-9002-23c81b622b7c","Type":"ContainerStarted","Data":"31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.395925 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.395905853 podStartE2EDuration="2.395905853s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:24.385996155 +0000 UTC m=+5846.462829184" watchObservedRunningTime="2026-01-21 16:11:24.395905853 +0000 UTC m=+5846.472738882" Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.403631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11b80ea3-f5a8-48c8-ba60-d26265f71a6b","Type":"ContainerStarted","Data":"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.409361 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.409345881 podStartE2EDuration="2.409345881s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:24.407977772 +0000 UTC m=+5846.484810801" watchObservedRunningTime="2026-01-21 16:11:24.409345881 +0000 UTC m=+5846.486178910" Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.411329 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerStarted","Data":"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.411374 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerStarted","Data":"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.411385 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerStarted","Data":"70ca050dfe064f0adfae2ee5b6a0be6a9d2b4a8c56771dced41717611bd3cc98"} Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.440628 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.44060604 podStartE2EDuration="2.44060604s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:24.431984497 +0000 UTC m=+5846.508817546" watchObservedRunningTime="2026-01-21 16:11:24.44060604 +0000 UTC m=+5846.517439069" Jan 21 16:11:24 crc kubenswrapper[4902]: I0121 16:11:24.458428 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.45840441 podStartE2EDuration="2.45840441s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:24.446746482 +0000 UTC m=+5846.523579511" watchObservedRunningTime="2026-01-21 16:11:24.45840441 +0000 UTC m=+5846.535237439" Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.433869 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerStarted","Data":"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2"} Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.435330 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.445733 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" event={"ID":"6fafbdf5-1100-4f6f-831e-c7dd0fc63586","Type":"ContainerStarted","Data":"889fe026bf2a7b74189409dad70c2684f40ab43f381e9a39094266539161c3b9"} Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.465910 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" podStartSLOduration=3.465888967 podStartE2EDuration="3.465888967s" podCreationTimestamp="2026-01-21 16:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:25.455428133 +0000 UTC m=+5847.532261172" watchObservedRunningTime="2026-01-21 16:11:25.465888967 +0000 UTC m=+5847.542721996" Jan 21 16:11:25 crc kubenswrapper[4902]: I0121 16:11:25.472527 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" podStartSLOduration=2.472492012 podStartE2EDuration="2.472492012s" podCreationTimestamp="2026-01-21 16:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:25.470915158 +0000 UTC m=+5847.547748187" watchObservedRunningTime="2026-01-21 16:11:25.472492012 +0000 UTC m=+5847.549325041" Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.388137 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.451474 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-log" containerID="cri-o://40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" gracePeriod=30 Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.451617 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-metadata" containerID="cri-o://537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" gracePeriod=30 Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.459096 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:26 crc kubenswrapper[4902]: I0121 16:11:26.459306 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" gracePeriod=30 Jan 21 16:11:26 crc kubenswrapper[4902]: E0121 16:11:26.774148 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31a37d5_535d_42a2_85bd_29497224ebb2.slice/crio-537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.283259 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.372776 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") pod \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.372852 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") pod \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.372882 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") pod \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\" (UID: \"11b80ea3-f5a8-48c8-ba60-d26265f71a6b\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.383710 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8" (OuterVolumeSpecName: "kube-api-access-5m2j8") pod "11b80ea3-f5a8-48c8-ba60-d26265f71a6b" (UID: "11b80ea3-f5a8-48c8-ba60-d26265f71a6b"). InnerVolumeSpecName "kube-api-access-5m2j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.413228 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11b80ea3-f5a8-48c8-ba60-d26265f71a6b" (UID: "11b80ea3-f5a8-48c8-ba60-d26265f71a6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.428459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data" (OuterVolumeSpecName: "config-data") pod "11b80ea3-f5a8-48c8-ba60-d26265f71a6b" (UID: "11b80ea3-f5a8-48c8-ba60-d26265f71a6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.457443 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462198 4902 generic.go:334] "Generic (PLEG): container finished" podID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerID="45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" exitCode=0 Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11b80ea3-f5a8-48c8-ba60-d26265f71a6b","Type":"ContainerDied","Data":"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462294 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462342 4902 scope.go:117] "RemoveContainer" containerID="45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.462331 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"11b80ea3-f5a8-48c8-ba60-d26265f71a6b","Type":"ContainerDied","Data":"0ac3900103983dac93de1f7fd1bd8a8ee9bd704671df4483ae644f05d4a22117"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464014 4902 generic.go:334] "Generic (PLEG): container finished" podID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" exitCode=0 Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464033 4902 generic.go:334] "Generic (PLEG): container finished" podID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" exitCode=143 Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464062 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerDied","Data":"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464093 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerDied","Data":"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464104 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f31a37d5-535d-42a2-85bd-29497224ebb2","Type":"ContainerDied","Data":"70ca050dfe064f0adfae2ee5b6a0be6a9d2b4a8c56771dced41717611bd3cc98"} Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.464111 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.475287 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") pod \"f31a37d5-535d-42a2-85bd-29497224ebb2\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.475555 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") pod \"f31a37d5-535d-42a2-85bd-29497224ebb2\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.475633 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") pod \"f31a37d5-535d-42a2-85bd-29497224ebb2\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.476084 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs" (OuterVolumeSpecName: "logs") pod "f31a37d5-535d-42a2-85bd-29497224ebb2" (UID: "f31a37d5-535d-42a2-85bd-29497224ebb2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.476709 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") pod \"f31a37d5-535d-42a2-85bd-29497224ebb2\" (UID: \"f31a37d5-535d-42a2-85bd-29497224ebb2\") " Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.478296 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31a37d5-535d-42a2-85bd-29497224ebb2-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.478330 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m2j8\" (UniqueName: \"kubernetes.io/projected/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-kube-api-access-5m2j8\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.478346 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.478359 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b80ea3-f5a8-48c8-ba60-d26265f71a6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.479801 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m" (OuterVolumeSpecName: "kube-api-access-vb79m") pod "f31a37d5-535d-42a2-85bd-29497224ebb2" (UID: "f31a37d5-535d-42a2-85bd-29497224ebb2"). InnerVolumeSpecName "kube-api-access-vb79m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.502288 4902 scope.go:117] "RemoveContainer" containerID="45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.502736 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a\": container with ID starting with 45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a not found: ID does not exist" containerID="45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.502770 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a"} err="failed to get container status \"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a\": rpc error: code = NotFound desc = could not find container \"45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a\": container with ID starting with 45689293f13394dfae807bd04db16dcf5f88745e68f80a5caa245ce3a17f317a not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.502791 4902 scope.go:117] "RemoveContainer" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.505387 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.512276 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f31a37d5-535d-42a2-85bd-29497224ebb2" (UID: "f31a37d5-535d-42a2-85bd-29497224ebb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.516610 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.525289 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.532415 4902 scope.go:117] "RemoveContainer" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.541096 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data" (OuterVolumeSpecName: "config-data") pod "f31a37d5-535d-42a2-85bd-29497224ebb2" (UID: "f31a37d5-535d-42a2-85bd-29497224ebb2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.542673 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.543035 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-metadata" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543065 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-metadata" Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.543094 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-log" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543103 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-log" Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.543120 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543126 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543300 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-metadata" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543327 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" containerName="nova-metadata-log" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.543336 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.544032 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.550821 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.551034 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.551184 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.565998 4902 scope.go:117] "RemoveContainer" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.569977 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": container with ID starting with 537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55 not found: ID does not exist" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.570016 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55"} err="failed to get container status \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": rpc error: code = NotFound desc = could not find container \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": container with ID starting with 537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55 not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.570056 4902 scope.go:117] "RemoveContainer" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.573304 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: E0121 16:11:27.575197 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": container with ID starting with 40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45 not found: ID does not exist" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.575242 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45"} err="failed to get container status \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": rpc error: code = NotFound desc = could not find container \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": container with ID starting with 40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45 not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.575272 4902 scope.go:117] "RemoveContainer" containerID="537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.579174 4902 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55"} err="failed to get container status \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": rpc error: code = NotFound desc = could not find container \"537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55\": container with ID starting with 537fbe043394c40ecf72493a39470e9b9453a5a159b6c4296563b98fbb115c55 not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.579212 4902 scope.go:117] "RemoveContainer" containerID="40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585308 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585464 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrnt\" (UniqueName: \"kubernetes.io/projected/78825018-5d0a-4fe7-83c7-ef79700642cd-kube-api-access-2lrnt\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585595 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585646 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585773 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb79m\" (UniqueName: \"kubernetes.io/projected/f31a37d5-535d-42a2-85bd-29497224ebb2-kube-api-access-vb79m\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585792 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.585806 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31a37d5-535d-42a2-85bd-29497224ebb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.586189 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45"} err="failed to get container status \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": rpc error: code = NotFound desc = could not find container \"40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45\": container with ID starting with 40a32047ac37804761575d1826367393c05a4c89d3347ca480635af9d8524e45 not found: ID does not exist" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688254 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lrnt\" (UniqueName: \"kubernetes.io/projected/78825018-5d0a-4fe7-83c7-ef79700642cd-kube-api-access-2lrnt\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688338 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688413 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688448 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.688521 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.696249 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.700490 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.700606 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.712703 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lrnt\" (UniqueName: \"kubernetes.io/projected/78825018-5d0a-4fe7-83c7-ef79700642cd-kube-api-access-2lrnt\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.713396 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/78825018-5d0a-4fe7-83c7-ef79700642cd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"78825018-5d0a-4fe7-83c7-ef79700642cd\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.830612 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.843552 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.854189 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.855654 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.862517 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.862683 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.869516 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892459 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892554 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892583 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892606 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.892643 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.933352 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994271 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994552 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994578 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994605 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.994645 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:27 crc kubenswrapper[4902]: I0121 16:11:27.997550 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.001611 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.001611 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.002711 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.015705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") pod \"nova-metadata-0\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.184378 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.312418 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b80ea3-f5a8-48c8-ba60-d26265f71a6b" path="/var/lib/kubelet/pods/11b80ea3-f5a8-48c8-ba60-d26265f71a6b/volumes" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.313100 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31a37d5-535d-42a2-85bd-29497224ebb2" path="/var/lib/kubelet/pods/f31a37d5-535d-42a2-85bd-29497224ebb2/volumes" Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.463175 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:11:28 crc kubenswrapper[4902]: W0121 16:11:28.468876 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78825018_5d0a_4fe7_83c7_ef79700642cd.slice/crio-14c732739398993b1ed2c099355743d5d42d0e1130b24e86f9bbbb3a83a28a2c WatchSource:0}: Error finding container 14c732739398993b1ed2c099355743d5d42d0e1130b24e86f9bbbb3a83a28a2c: Status 404 returned error can't find the container with id 14c732739398993b1ed2c099355743d5d42d0e1130b24e86f9bbbb3a83a28a2c Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.478103 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" event={"ID":"6fafbdf5-1100-4f6f-831e-c7dd0fc63586","Type":"ContainerDied","Data":"889fe026bf2a7b74189409dad70c2684f40ab43f381e9a39094266539161c3b9"} Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.478039 4902 generic.go:334] "Generic (PLEG): container finished" podID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" containerID="889fe026bf2a7b74189409dad70c2684f40ab43f381e9a39094266539161c3b9" exitCode=0 Jan 21 16:11:28 crc kubenswrapper[4902]: I0121 16:11:28.620753 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.490813 4902 generic.go:334] "Generic (PLEG): container finished" podID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" containerID="7e054620420f286eb319ea74bdca60ca0a6e43b9d52a5c4ad7043b88a7a02929" exitCode=0 Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.491357 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7ld7m" event={"ID":"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32","Type":"ContainerDied","Data":"7e054620420f286eb319ea74bdca60ca0a6e43b9d52a5c4ad7043b88a7a02929"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.493478 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerStarted","Data":"6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309"} Jan 21 16:11:29 crc 
kubenswrapper[4902]: I0121 16:11:29.493549 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerStarted","Data":"badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.493566 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerStarted","Data":"c7e7aaf7977a7ad1cca77abff20f575bad05c5750d7ec60c2c6c5384633a215a"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.495578 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78825018-5d0a-4fe7-83c7-ef79700642cd","Type":"ContainerStarted","Data":"d3fba8ddc406502ea544213a2eb84d3faf6ab2405d901c458926b932c0b86ae7"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.495613 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"78825018-5d0a-4fe7-83c7-ef79700642cd","Type":"ContainerStarted","Data":"14c732739398993b1ed2c099355743d5d42d0e1130b24e86f9bbbb3a83a28a2c"} Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.557637 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.557613984 podStartE2EDuration="2.557613984s" podCreationTimestamp="2026-01-21 16:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:29.538442585 +0000 UTC m=+5851.615275624" watchObservedRunningTime="2026-01-21 16:11:29.557613984 +0000 UTC m=+5851.634447013" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.583859 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.583834281 podStartE2EDuration="2.583834281s" podCreationTimestamp="2026-01-21 16:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:29.575773015 +0000 UTC m=+5851.652606044" watchObservedRunningTime="2026-01-21 16:11:29.583834281 +0000 UTC m=+5851.660667320" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.905380 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.930862 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") pod \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.930999 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") pod \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.931089 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") pod \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.931174 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") pod \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\" (UID: \"6fafbdf5-1100-4f6f-831e-c7dd0fc63586\") " Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.936252 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn" (OuterVolumeSpecName: "kube-api-access-vj5cn") pod "6fafbdf5-1100-4f6f-831e-c7dd0fc63586" (UID: "6fafbdf5-1100-4f6f-831e-c7dd0fc63586"). InnerVolumeSpecName "kube-api-access-vj5cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.936522 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts" (OuterVolumeSpecName: "scripts") pod "6fafbdf5-1100-4f6f-831e-c7dd0fc63586" (UID: "6fafbdf5-1100-4f6f-831e-c7dd0fc63586"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.974501 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data" (OuterVolumeSpecName: "config-data") pod "6fafbdf5-1100-4f6f-831e-c7dd0fc63586" (UID: "6fafbdf5-1100-4f6f-831e-c7dd0fc63586"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:29 crc kubenswrapper[4902]: I0121 16:11:29.980745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fafbdf5-1100-4f6f-831e-c7dd0fc63586" (UID: "6fafbdf5-1100-4f6f-831e-c7dd0fc63586"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.034040 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj5cn\" (UniqueName: \"kubernetes.io/projected/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-kube-api-access-vj5cn\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.034099 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.034112 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.034126 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fafbdf5-1100-4f6f-831e-c7dd0fc63586-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.510707 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.510631 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mqjfk" event={"ID":"6fafbdf5-1100-4f6f-831e-c7dd0fc63586","Type":"ContainerDied","Data":"a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19"} Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.510766 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74358b1c5a77d77722a14ca051562ae9ede85ed8984c1a6ea4d025963ae5d19" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.614977 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:11:30 crc kubenswrapper[4902]: E0121 16:11:30.615485 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" containerName="nova-cell1-conductor-db-sync" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.615504 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" containerName="nova-cell1-conductor-db-sync" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.615778 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" containerName="nova-cell1-conductor-db-sync" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.616536 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.616632 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.634203 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.647386 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgv4h\" (UniqueName: \"kubernetes.io/projected/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-kube-api-access-zgv4h\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.647806 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.647899 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.748765 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgv4h\" (UniqueName: \"kubernetes.io/projected/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-kube-api-access-zgv4h\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.748813 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.748882 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.766081 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.766456 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.767819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgv4h\" (UniqueName: 
\"kubernetes.io/projected/f7b3d3ef-1806-4318-95f7-eb9cd2526d32-kube-api-access-zgv4h\") pod \"nova-cell1-conductor-0\" (UID: \"f7b3d3ef-1806-4318-95f7-eb9cd2526d32\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.930888 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.953102 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") pod \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.953316 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") pod \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.953458 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") pod \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.953526 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") pod \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\" (UID: \"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32\") " Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.957829 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh" (OuterVolumeSpecName: "kube-api-access-sbbjh") pod "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" (UID: "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32"). InnerVolumeSpecName "kube-api-access-sbbjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.960582 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts" (OuterVolumeSpecName: "scripts") pod "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" (UID: "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:30 crc kubenswrapper[4902]: I0121 16:11:30.968148 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.000946 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data" (OuterVolumeSpecName: "config-data") pod "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" (UID: "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.007745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" (UID: "beebb97d-c56a-4c7d-8ec0-f9982f9c2e32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.055508 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.055564 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.055575 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.055586 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbbjh\" (UniqueName: \"kubernetes.io/projected/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32-kube-api-access-sbbjh\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.406631 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:11:31 crc kubenswrapper[4902]: W0121 16:11:31.408125 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7b3d3ef_1806_4318_95f7_eb9cd2526d32.slice/crio-37bcc25cebf61837fdb3efe7f1d84f593d9fd45e8c61da49dcac811a63bcaa34 WatchSource:0}: Error finding container 37bcc25cebf61837fdb3efe7f1d84f593d9fd45e8c61da49dcac811a63bcaa34: Status 404 returned error can't find the container with id 37bcc25cebf61837fdb3efe7f1d84f593d9fd45e8c61da49dcac811a63bcaa34 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.520189 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7ld7m" event={"ID":"beebb97d-c56a-4c7d-8ec0-f9982f9c2e32","Type":"ContainerDied","Data":"aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724"} Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.520232 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aff4329b968547b9f0c29b41c0b1ec28e6bda5a703c8e7a44c04923bf8513724" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.520258 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7ld7m" Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.522837 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7b3d3ef-1806-4318-95f7-eb9cd2526d32","Type":"ContainerStarted","Data":"37bcc25cebf61837fdb3efe7f1d84f593d9fd45e8c61da49dcac811a63bcaa34"} Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.717523 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.718307 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-log" containerID="cri-o://5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" gracePeriod=30 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.718347 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-api" containerID="cri-o://1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" gracePeriod=30 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.745079 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.749138 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerName="nova-scheduler-scheduler" containerID="cri-o://31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b" gracePeriod=30 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.769255 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.769494 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-log" containerID="cri-o://badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a" gracePeriod=30 Jan 21 16:11:31 crc kubenswrapper[4902]: I0121 16:11:31.769644 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-metadata" containerID="cri-o://6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309" gracePeriod=30 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.463279 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560183 4902 generic.go:334] "Generic (PLEG): container finished" podID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" exitCode=0 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560455 4902 generic.go:334] "Generic (PLEG): container finished" podID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" exitCode=143 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560396 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerDied","Data":"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560522 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560548 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerDied","Data":"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560562 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6e0e21-ab4e-40db-ad7d-fde50926c691","Type":"ContainerDied","Data":"4b639bfa91be42f81663a77a5e76c1832f5e50df04be72677151e02c7b0de405"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.560579 4902 scope.go:117] "RemoveContainer" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.568471 4902 generic.go:334] "Generic (PLEG): container finished" podID="06cee7ae-d3df-4c78-8056-2877e835409a" containerID="6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309" exitCode=0 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.568505 4902 generic.go:334] "Generic (PLEG): container finished" podID="06cee7ae-d3df-4c78-8056-2877e835409a" containerID="badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a" exitCode=143 Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.568573 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerDied","Data":"6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.568619 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerDied","Data":"badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.570196 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f7b3d3ef-1806-4318-95f7-eb9cd2526d32","Type":"ContainerStarted","Data":"8ca33c8fc5a0e63e441bf1f4d2ea2248656dab9b2afd65c9c46a409be9c991bf"} Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.571560 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.597251 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.59722799 podStartE2EDuration="2.59722799s" podCreationTimestamp="2026-01-21 16:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:32.589626166 +0000 UTC m=+5854.666459195" watchObservedRunningTime="2026-01-21 16:11:32.59722799 +0000 UTC m=+5854.674061019" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.647396 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") pod \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.647957 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") pod \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.648002 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") pod \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.648110 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") pod \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\" (UID: \"3a6e0e21-ab4e-40db-ad7d-fde50926c691\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.649357 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs" (OuterVolumeSpecName: "logs") pod "3a6e0e21-ab4e-40db-ad7d-fde50926c691" (UID: "3a6e0e21-ab4e-40db-ad7d-fde50926c691"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.654395 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g" (OuterVolumeSpecName: "kube-api-access-4pq4g") pod "3a6e0e21-ab4e-40db-ad7d-fde50926c691" (UID: "3a6e0e21-ab4e-40db-ad7d-fde50926c691"). InnerVolumeSpecName "kube-api-access-4pq4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.673196 4902 scope.go:117] "RemoveContainer" containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.688133 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data" (OuterVolumeSpecName: "config-data") pod "3a6e0e21-ab4e-40db-ad7d-fde50926c691" (UID: "3a6e0e21-ab4e-40db-ad7d-fde50926c691"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.691329 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a6e0e21-ab4e-40db-ad7d-fde50926c691" (UID: "3a6e0e21-ab4e-40db-ad7d-fde50926c691"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.748432 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.749300 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.749328 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e0e21-ab4e-40db-ad7d-fde50926c691-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.749337 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pq4g\" (UniqueName: \"kubernetes.io/projected/3a6e0e21-ab4e-40db-ad7d-fde50926c691-kube-api-access-4pq4g\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.749346 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6e0e21-ab4e-40db-ad7d-fde50926c691-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.757143 4902 scope.go:117] "RemoveContainer" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" Jan 21 16:11:32 crc kubenswrapper[4902]: E0121 16:11:32.760769 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": container with ID starting with 1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82 not found: ID does not exist" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.760808 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82"} err="failed to get container status \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": rpc error: code = NotFound desc = could not find container \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": container with ID starting with 1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82 not found: ID does not exist" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.760829 4902 scope.go:117] "RemoveContainer" containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" Jan 21 16:11:32 crc kubenswrapper[4902]: E0121 16:11:32.761186 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": container with ID starting with 5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b not found: ID does not exist" 
containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761222 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b"} err="failed to get container status \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": rpc error: code = NotFound desc = could not find container \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": container with ID starting with 5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b not found: ID does not exist" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761267 4902 scope.go:117] "RemoveContainer" containerID="1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761495 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82"} err="failed to get container status \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": rpc error: code = NotFound desc = could not find container \"1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82\": container with ID starting with 1d5c76aea43f002af0e01a503ea0f35489e749ad5ff77626fdaa80a4fd518d82 not found: ID does not exist" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761511 4902 scope.go:117] "RemoveContainer" containerID="5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.761682 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b"} err="failed to get container status \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": rpc error: code = NotFound desc = could not find container \"5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b\": container with ID starting with 5d760a9407e2c36eff34b1b54a07052f9d4eda4ac43791636d4d6eff0f6f9a2b not found: ID does not exist" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850109 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") pod \"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850277 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") pod \"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850356 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") pod \"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850431 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") pod 
\"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.850479 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") pod \"06cee7ae-d3df-4c78-8056-2877e835409a\" (UID: \"06cee7ae-d3df-4c78-8056-2877e835409a\") " Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.851286 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs" (OuterVolumeSpecName: "logs") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.854111 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6" (OuterVolumeSpecName: "kube-api-access-cpjw6") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "kube-api-access-cpjw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.882527 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data" (OuterVolumeSpecName: "config-data") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.884267 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.892527 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.903929 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "06cee7ae-d3df-4c78-8056-2877e835409a" (UID: "06cee7ae-d3df-4c78-8056-2877e835409a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.941446 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.942852 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954636 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06cee7ae-d3df-4c78-8056-2877e835409a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954676 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954688 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954698 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/06cee7ae-d3df-4c78-8056-2877e835409a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.954708 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpjw6\" (UniqueName: \"kubernetes.io/projected/06cee7ae-d3df-4c78-8056-2877e835409a-kube-api-access-cpjw6\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.969324 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.981372 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:11:32 crc kubenswrapper[4902]: I0121 16:11:32.981680 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-ddb658677-chfv4" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="dnsmasq-dns" containerID="cri-o://ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" gracePeriod=10 Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000117 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000557 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-log" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000571 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-log" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000591 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-metadata" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000597 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-metadata" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000611 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-api" Jan 21 16:11:33 crc 
kubenswrapper[4902]: I0121 16:11:33.000618 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-api" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000638 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-log" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000644 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-log" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.000665 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" containerName="nova-manage" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000677 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" containerName="nova-manage" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000873 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" containerName="nova-manage" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000891 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-metadata" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000902 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-api" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000913 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" containerName="nova-metadata-log" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.000927 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" containerName="nova-api-log" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.002130 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.008798 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.016548 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.057239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.057347 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.057828 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.057898 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.160582 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.160705 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.160786 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.160821 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.161760 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " 
pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.169388 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.170909 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.187767 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") pod \"nova-api-0\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.328759 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.422308 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465556 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465597 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465676 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465738 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.465886 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5rm5\" (UniqueName: \"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") pod \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\" (UID: \"9a24ae7c-3fa5-479a-84b4-56ad2792d386\") " Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.472942 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5" (OuterVolumeSpecName: "kube-api-access-w5rm5") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). 
InnerVolumeSpecName "kube-api-access-w5rm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.526723 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.533448 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.541116 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.541970 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config" (OuterVolumeSpecName: "config") pod "9a24ae7c-3fa5-479a-84b4-56ad2792d386" (UID: "9a24ae7c-3fa5-479a-84b4-56ad2792d386"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570005 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5rm5\" (UniqueName: \"kubernetes.io/projected/9a24ae7c-3fa5-479a-84b4-56ad2792d386-kube-api-access-w5rm5\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570077 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570092 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570105 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.570118 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a24ae7c-3fa5-479a-84b4-56ad2792d386-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596385 4902 generic.go:334] "Generic (PLEG): container finished" podID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerID="ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" exitCode=0 Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596458 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerDied","Data":"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7"} Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596494 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ddb658677-chfv4" event={"ID":"9a24ae7c-3fa5-479a-84b4-56ad2792d386","Type":"ContainerDied","Data":"cdda0da4083f2f0e9099ddfcab1f5b7e57fb3c8539f90cf5b020d3761c23f6b0"} Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596516 4902 scope.go:117] "RemoveContainer" containerID="ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.596658 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ddb658677-chfv4" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.610273 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.614568 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06cee7ae-d3df-4c78-8056-2877e835409a","Type":"ContainerDied","Data":"c7e7aaf7977a7ad1cca77abff20f575bad05c5750d7ec60c2c6c5384633a215a"} Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.661201 4902 scope.go:117] "RemoveContainer" containerID="84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.701062 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.715965 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ddb658677-chfv4"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.720592 4902 scope.go:117] "RemoveContainer" containerID="ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.721191 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7\": container with ID starting with ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7 not found: ID does not exist" containerID="ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.721234 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7"} err="failed to get container status \"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7\": rpc error: code = NotFound desc = could not find container \"ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7\": container with ID starting with ab2d18c944578ed9d7842ff06c2de449ceeeb504e08f78ea6d30d3f8ad42cce7 not found: ID does not exist" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.721257 4902 scope.go:117] "RemoveContainer" containerID="84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.721459 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9\": container with ID starting 
with 84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9 not found: ID does not exist" containerID="84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.721478 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9"} err="failed to get container status \"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9\": rpc error: code = NotFound desc = could not find container \"84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9\": container with ID starting with 84bed1e613719e3773c1b9d9b2ea9d7ab951f0b0abeb24309b7e16ecbd52c1a9 not found: ID does not exist" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.721491 4902 scope.go:117] "RemoveContainer" containerID="6db095d59697014f5d540bbdbf584a4f12b528507f890ae6dcc568fbd9d40309" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.752133 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.754891 4902 scope.go:117] "RemoveContainer" containerID="badd389d3e26fbdcc8666e8bf066881f6cf09b62a13c95591c64f06bb805654a" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.763061 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.770807 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.771309 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="init" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.771333 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="init" Jan 21 16:11:33 crc kubenswrapper[4902]: E0121 16:11:33.771365 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="dnsmasq-dns" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.771374 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="dnsmasq-dns" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.771599 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" containerName="dnsmasq-dns" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.773390 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.775626 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.775669 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.782085 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.830458 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:33 crc kubenswrapper[4902]: W0121 16:11:33.832418 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11cfdeec_5c4f_4051_8c8d_3c4c3e648e87.slice/crio-869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff WatchSource:0}: Error finding container 869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff: Status 404 returned error can't find the container with id 869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884580 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884644 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884722 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884762 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.884835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.986509 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc 
kubenswrapper[4902]: I0121 16:11:33.986579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.986612 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.986671 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.986699 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.987303 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.992550 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.993446 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:33 crc kubenswrapper[4902]: I0121 16:11:33.993583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.012625 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") pod \"nova-metadata-0\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " pod="openstack/nova-metadata-0" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.100263 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.314336 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06cee7ae-d3df-4c78-8056-2877e835409a" path="/var/lib/kubelet/pods/06cee7ae-d3df-4c78-8056-2877e835409a/volumes" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.315769 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6e0e21-ab4e-40db-ad7d-fde50926c691" path="/var/lib/kubelet/pods/3a6e0e21-ab4e-40db-ad7d-fde50926c691/volumes" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.316963 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a24ae7c-3fa5-479a-84b4-56ad2792d386" path="/var/lib/kubelet/pods/9a24ae7c-3fa5-479a-84b4-56ad2792d386/volumes" Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.554055 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:34 crc kubenswrapper[4902]: W0121 16:11:34.555974 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b3b10dc_7950_4a5c_a31d_3fc11ce4de05.slice/crio-5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8 WatchSource:0}: Error finding container 5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8: Status 404 returned error can't find the container with id 5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8 Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.622438 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerStarted","Data":"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692"} Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.622484 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerStarted","Data":"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681"} Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.622500 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerStarted","Data":"869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff"} Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.625637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerStarted","Data":"5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8"} Jan 21 16:11:34 crc kubenswrapper[4902]: I0121 16:11:34.647469 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.647447756 podStartE2EDuration="2.647447756s" podCreationTimestamp="2026-01-21 16:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:34.639018349 +0000 UTC m=+5856.715851388" watchObservedRunningTime="2026-01-21 16:11:34.647447756 +0000 UTC m=+5856.724280785" Jan 21 16:11:35 crc kubenswrapper[4902]: I0121 16:11:35.648865 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerStarted","Data":"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e"} Jan 21 
16:11:35 crc kubenswrapper[4902]: I0121 16:11:35.649243 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerStarted","Data":"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f"} Jan 21 16:11:35 crc kubenswrapper[4902]: I0121 16:11:35.679067 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.679022571 podStartE2EDuration="2.679022571s" podCreationTimestamp="2026-01-21 16:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:35.676749538 +0000 UTC m=+5857.753582587" watchObservedRunningTime="2026-01-21 16:11:35.679022571 +0000 UTC m=+5857.755855620" Jan 21 16:11:37 crc kubenswrapper[4902]: I0121 16:11:37.934635 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:37 crc kubenswrapper[4902]: I0121 16:11:37.960359 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:38 crc kubenswrapper[4902]: I0121 16:11:38.694257 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:11:39 crc kubenswrapper[4902]: I0121 16:11:39.100981 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:11:39 crc kubenswrapper[4902]: I0121 16:11:39.101147 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.005552 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.528129 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"] Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.530203 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.533118 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.533404 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.543676 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"] Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.645960 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.646028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.646288 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.646622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.748732 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.749380 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.750529 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.750718 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.755624 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.756816 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.766454 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.774100 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qd5pv\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:41 crc kubenswrapper[4902]: I0121 16:11:41.863992 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:42 crc kubenswrapper[4902]: I0121 16:11:42.319009 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"] Jan 21 16:11:42 crc kubenswrapper[4902]: W0121 16:11:42.321018 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87c2e205_1cb6_4b63_89d5_c03370d5cb02.slice/crio-e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33 WatchSource:0}: Error finding container e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33: Status 404 returned error can't find the container with id e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33 Jan 21 16:11:42 crc kubenswrapper[4902]: I0121 16:11:42.719591 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qd5pv" event={"ID":"87c2e205-1cb6-4b63-89d5-c03370d5cb02","Type":"ContainerStarted","Data":"2c69e68e7d02d1de6bf68e1e65e17ee7498b6d1191ba5efd74e3f15243d799ed"} Jan 21 16:11:42 crc kubenswrapper[4902]: I0121 16:11:42.719895 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qd5pv" event={"ID":"87c2e205-1cb6-4b63-89d5-c03370d5cb02","Type":"ContainerStarted","Data":"e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33"} Jan 21 16:11:42 crc kubenswrapper[4902]: I0121 16:11:42.744076 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qd5pv" podStartSLOduration=1.74403542 podStartE2EDuration="1.74403542s" podCreationTimestamp="2026-01-21 16:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:42.735821849 +0000 UTC m=+5864.812654888" watchObservedRunningTime="2026-01-21 16:11:42.74403542 +0000 UTC m=+5864.820868459" Jan 21 16:11:43 crc kubenswrapper[4902]: I0121 16:11:43.329893 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:11:43 crc kubenswrapper[4902]: I0121 16:11:43.330247 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:11:44 crc kubenswrapper[4902]: I0121 16:11:44.101565 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:11:44 crc kubenswrapper[4902]: I0121 16:11:44.101609 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:11:44 crc kubenswrapper[4902]: I0121 16:11:44.414231 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:11:44 crc kubenswrapper[4902]: I0121 16:11:44.414231 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:11:45 crc kubenswrapper[4902]: I0121 16:11:45.118508 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.83:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:11:45 crc kubenswrapper[4902]: I0121 16:11:45.118516 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.83:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:11:47 crc kubenswrapper[4902]: I0121 16:11:47.773722 4902 generic.go:334] "Generic (PLEG): container finished" podID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" containerID="2c69e68e7d02d1de6bf68e1e65e17ee7498b6d1191ba5efd74e3f15243d799ed" exitCode=0 Jan 21 16:11:47 crc kubenswrapper[4902]: I0121 16:11:47.773797 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qd5pv" event={"ID":"87c2e205-1cb6-4b63-89d5-c03370d5cb02","Type":"ContainerDied","Data":"2c69e68e7d02d1de6bf68e1e65e17ee7498b6d1191ba5efd74e3f15243d799ed"} Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.131024 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.302698 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") pod \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.302819 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") pod \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.302892 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") pod \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.303013 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") pod \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\" (UID: \"87c2e205-1cb6-4b63-89d5-c03370d5cb02\") " Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.308663 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks" (OuterVolumeSpecName: "kube-api-access-vq8ks") pod "87c2e205-1cb6-4b63-89d5-c03370d5cb02" (UID: "87c2e205-1cb6-4b63-89d5-c03370d5cb02"). InnerVolumeSpecName "kube-api-access-vq8ks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.310303 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts" (OuterVolumeSpecName: "scripts") pod "87c2e205-1cb6-4b63-89d5-c03370d5cb02" (UID: "87c2e205-1cb6-4b63-89d5-c03370d5cb02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.331769 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data" (OuterVolumeSpecName: "config-data") pod "87c2e205-1cb6-4b63-89d5-c03370d5cb02" (UID: "87c2e205-1cb6-4b63-89d5-c03370d5cb02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.348871 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87c2e205-1cb6-4b63-89d5-c03370d5cb02" (UID: "87c2e205-1cb6-4b63-89d5-c03370d5cb02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.405227 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.405265 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.405277 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87c2e205-1cb6-4b63-89d5-c03370d5cb02-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.405290 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq8ks\" (UniqueName: \"kubernetes.io/projected/87c2e205-1cb6-4b63-89d5-c03370d5cb02-kube-api-access-vq8ks\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.798536 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qd5pv" event={"ID":"87c2e205-1cb6-4b63-89d5-c03370d5cb02","Type":"ContainerDied","Data":"e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33"} Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.798573 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e424a70566b2f7137d7456e4862b7715141b75fada4e7de5d82ecb3129696b33" Jan 21 16:11:49 crc kubenswrapper[4902]: I0121 16:11:49.798614 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qd5pv" Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.000458 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.000812 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" containerID="cri-o://486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" gracePeriod=30 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.000932 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" containerID="cri-o://84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" gracePeriod=30 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.040514 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.040963 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" containerID="cri-o://1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" gracePeriod=30 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.041134 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" containerID="cri-o://42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" gracePeriod=30 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.808829 4902 generic.go:334] "Generic (PLEG): container finished" podID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerID="1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" exitCode=143 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.808910 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerDied","Data":"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f"} Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.811690 4902 generic.go:334] "Generic (PLEG): container finished" podID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerID="486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" exitCode=143 Jan 21 16:11:50 crc kubenswrapper[4902]: I0121 16:11:50.811868 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerDied","Data":"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681"} Jan 21 16:12:01 crc kubenswrapper[4902]: I0121 16:12:01.940768 4902 generic.go:334] "Generic (PLEG): container finished" podID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerID="31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b" exitCode=137 Jan 21 16:12:01 crc kubenswrapper[4902]: I0121 16:12:01.940995 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57ebee9b-653a-4d49-9002-23c81b622b7c","Type":"ContainerDied","Data":"31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b"} Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.249011 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.277604 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") pod \"57ebee9b-653a-4d49-9002-23c81b622b7c\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.277656 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") pod \"57ebee9b-653a-4d49-9002-23c81b622b7c\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.277697 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") pod \"57ebee9b-653a-4d49-9002-23c81b622b7c\" (UID: \"57ebee9b-653a-4d49-9002-23c81b622b7c\") " Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.283036 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq" (OuterVolumeSpecName: "kube-api-access-vdcvq") pod "57ebee9b-653a-4d49-9002-23c81b622b7c" (UID: "57ebee9b-653a-4d49-9002-23c81b622b7c"). InnerVolumeSpecName "kube-api-access-vdcvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.306207 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data" (OuterVolumeSpecName: "config-data") pod "57ebee9b-653a-4d49-9002-23c81b622b7c" (UID: "57ebee9b-653a-4d49-9002-23c81b622b7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.309720 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57ebee9b-653a-4d49-9002-23c81b622b7c" (UID: "57ebee9b-653a-4d49-9002-23c81b622b7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.379866 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.380770 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57ebee9b-653a-4d49-9002-23c81b622b7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.380895 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdcvq\" (UniqueName: \"kubernetes.io/projected/57ebee9b-653a-4d49-9002-23c81b622b7c-kube-api-access-vdcvq\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.958245 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"57ebee9b-653a-4d49-9002-23c81b622b7c","Type":"ContainerDied","Data":"9ebe5c00f1a81b515c7ecc716c300b5811813ab974f7f4bd90b9fc00489cfc97"} Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.958570 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:12:02 crc kubenswrapper[4902]: I0121 16:12:02.958615 4902 scope.go:117] "RemoveContainer" containerID="31fae740e9f177c6066f6345aa9a2713697f906cf0f7e1329e3a321359e5144b" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.010443 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.020763 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.052751 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: E0121 16:12:03.053265 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" containerName="nova-manage" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.053291 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" containerName="nova-manage" Jan 21 16:12:03 crc kubenswrapper[4902]: E0121 16:12:03.053353 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerName="nova-scheduler-scheduler" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.053363 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerName="nova-scheduler-scheduler" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.053593 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" containerName="nova-manage" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.053613 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" containerName="nova-scheduler-scheduler" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.054409 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.057035 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.070854 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.094968 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmhb\" (UniqueName: \"kubernetes.io/projected/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-kube-api-access-tlmhb\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.095057 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.095100 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-config-data\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.196936 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlmhb\" (UniqueName: \"kubernetes.io/projected/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-kube-api-access-tlmhb\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.197017 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.197075 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-config-data\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.203179 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.203338 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-config-data\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.216748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlmhb\" (UniqueName: 
\"kubernetes.io/projected/6d12c9a0-2841-4a53-abd3-0cdb15d404fb-kube-api-access-tlmhb\") pod \"nova-scheduler-0\" (UID: \"6d12c9a0-2841-4a53-abd3-0cdb15d404fb\") " pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.329375 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.329517 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.377002 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.862523 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:12:03 crc kubenswrapper[4902]: W0121 16:12:03.867397 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d12c9a0_2841_4a53_abd3_0cdb15d404fb.slice/crio-6b0058327c456ab7eb78a9420c03f22a3fd81f785cc01fb07fadf7771d6a7ff7 WatchSource:0}: Error finding container 6b0058327c456ab7eb78a9420c03f22a3fd81f785cc01fb07fadf7771d6a7ff7: Status 404 returned error can't find the container with id 6b0058327c456ab7eb78a9420c03f22a3fd81f785cc01fb07fadf7771d6a7ff7 Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.882630 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.925207 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.970787 4902 generic.go:334] "Generic (PLEG): container finished" podID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerID="42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" exitCode=0 Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.970876 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.970876 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerDied","Data":"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e"} Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.972244 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05","Type":"ContainerDied","Data":"5dc878fe2810b860179baaee0d2ae9f7776b38daf321c444be0ed0a6d720d1e8"} Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.972267 4902 scope.go:117] "RemoveContainer" containerID="42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.977315 4902 generic.go:334] "Generic (PLEG): container finished" podID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerID="84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" exitCode=0 Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.977409 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerDied","Data":"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692"} Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.977437 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87","Type":"ContainerDied","Data":"869e26c7cdc408cc6d30fb73754154da4ed626aa2bcfbfb82320a1a50bc79cff"} Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.977440 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:03 crc kubenswrapper[4902]: I0121 16:12:03.999232 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d12c9a0-2841-4a53-abd3-0cdb15d404fb","Type":"ContainerStarted","Data":"6b0058327c456ab7eb78a9420c03f22a3fd81f785cc01fb07fadf7771d6a7ff7"} Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.014939 4902 scope.go:117] "RemoveContainer" containerID="1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029275 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029328 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029360 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") pod \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029385 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") pod \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029425 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") pod \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029476 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029547 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") pod \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\" (UID: \"11cfdeec-5c4f-4051-8c8d-3c4c3e648e87\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.029591 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") pod \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\" (UID: \"8b3b10dc-7950-4a5c-a31d-3fc11ce4de05\") " Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.030394 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs" (OuterVolumeSpecName: "logs") pod "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" (UID: "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.030823 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs" (OuterVolumeSpecName: "logs") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.034033 4902 scope.go:117] "RemoveContainer" containerID="42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.034373 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8" (OuterVolumeSpecName: "kube-api-access-k9qv8") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "kube-api-access-k9qv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.034778 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth" (OuterVolumeSpecName: "kube-api-access-7kfth") pod "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" (UID: "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87"). InnerVolumeSpecName "kube-api-access-7kfth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.038576 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e\": container with ID starting with 42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e not found: ID does not exist" containerID="42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.038620 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e"} err="failed to get container status \"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e\": rpc error: code = NotFound desc = could not find container \"42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e\": container with ID starting with 42a9b364e527cd3f5c8f880c4803f97b2686e1199274ba0be96fa5bc32c4fa3e not found: ID does not exist" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.038648 4902 scope.go:117] "RemoveContainer" containerID="1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.039107 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f\": container with ID starting with 1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f not found: ID does not exist" containerID="1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.039152 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f"} err="failed to get container status \"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f\": rpc error: code = NotFound desc = could not find container \"1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f\": container with ID starting with 1a65355dca24eab6a15d8d64f9a5b0d57ea56e611ca55abb147d380b94780e2f not found: ID does not exist" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.039180 4902 scope.go:117] "RemoveContainer" containerID="84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.056067 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data" (OuterVolumeSpecName: "config-data") pod "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" (UID: "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.058828 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" (UID: "11cfdeec-5c4f-4051-8c8d-3c4c3e648e87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.061262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.062317 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data" (OuterVolumeSpecName: "config-data") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.069644 4902 scope.go:117] "RemoveContainer" containerID="486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.079792 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" (UID: "8b3b10dc-7950-4a5c-a31d-3fc11ce4de05"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.089007 4902 scope.go:117] "RemoveContainer" containerID="84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.089611 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692\": container with ID starting with 84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692 not found: ID does not exist" containerID="84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.089676 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692"} err="failed to get container status \"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692\": rpc error: code = NotFound desc = could not find container \"84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692\": container with ID starting with 84be47d7c73451e39699685b57d20f116337c8632af0327456ccf2bf9125b692 not found: ID does not exist" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.089710 4902 scope.go:117] "RemoveContainer" containerID="486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.090175 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681\": container with ID starting with 486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681 not found: ID does not exist" containerID="486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.090206 4902 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681"} err="failed to get container status \"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681\": rpc error: code = NotFound desc = could not find container \"486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681\": container with ID starting with 486d09b790914e558f302eaaedb49ef7381e66e0acb64b9a526792326daf8681 not found: ID does not exist" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131673 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131706 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131716 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131725 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kfth\" (UniqueName: \"kubernetes.io/projected/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-kube-api-access-7kfth\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131734 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131742 4902 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131751 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9qv8\" (UniqueName: \"kubernetes.io/projected/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-kube-api-access-k9qv8\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131762 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.131770 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.309400 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ebee9b-653a-4d49-9002-23c81b622b7c" path="/var/lib/kubelet/pods/57ebee9b-653a-4d49-9002-23c81b622b7c/volumes" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.315247 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.325513 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.341084 4902 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.364591 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390493 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.390903 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390921 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.390935 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390941 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.390967 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390973 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" Jan 21 16:12:04 crc kubenswrapper[4902]: E0121 16:12:04.390985 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.390991 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.391177 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-metadata" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.391197 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.391207 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" containerName="nova-api-api" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.391215 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" containerName="nova-metadata-log" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.392068 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.413799 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.427657 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.436023 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.439565 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.439628 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.446423 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.458757 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539470 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539495 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539520 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539536 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539560 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539598 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whg8b\" (UniqueName: \"kubernetes.io/projected/98338524-801f-465f-8845-1d061027c735-kube-api-access-whg8b\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539623 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-config-data\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.539665 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98338524-801f-465f-8845-1d061027c735-logs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.640830 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.640903 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whg8b\" (UniqueName: \"kubernetes.io/projected/98338524-801f-465f-8845-1d061027c735-kube-api-access-whg8b\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.640932 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-config-data\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.640982 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98338524-801f-465f-8845-1d061027c735-logs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641055 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641083 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641146 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641164 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.641734 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.642166 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98338524-801f-465f-8845-1d061027c735-logs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.645655 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.645713 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.655009 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.656316 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98338524-801f-465f-8845-1d061027c735-config-data\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.656747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.662001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") pod \"nova-api-0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.669558 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whg8b\" (UniqueName: \"kubernetes.io/projected/98338524-801f-465f-8845-1d061027c735-kube-api-access-whg8b\") pod \"nova-metadata-0\" (UID: \"98338524-801f-465f-8845-1d061027c735\") " pod="openstack/nova-metadata-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.738911 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:04 crc kubenswrapper[4902]: I0121 16:12:04.756521 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:12:05 crc kubenswrapper[4902]: I0121 16:12:05.013883 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d12c9a0-2841-4a53-abd3-0cdb15d404fb","Type":"ContainerStarted","Data":"272796d652b0e210858235440b0694234f672ace0166d1702647a8424056a119"} Jan 21 16:12:05 crc kubenswrapper[4902]: I0121 16:12:05.033090 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.033063874 podStartE2EDuration="2.033063874s" podCreationTimestamp="2026-01-21 16:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:05.028732502 +0000 UTC m=+5887.105565541" watchObservedRunningTime="2026-01-21 16:12:05.033063874 +0000 UTC m=+5887.109896903" Jan 21 16:12:05 crc kubenswrapper[4902]: I0121 16:12:05.256952 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:12:05 crc kubenswrapper[4902]: I0121 16:12:05.273851 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:05 crc kubenswrapper[4902]: W0121 16:12:05.282566 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07c018ac_4b51_418d_8410_6f3f6e84d0b0.slice/crio-060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea WatchSource:0}: Error finding container 060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea: Status 404 returned error can't find the container with id 060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.021943 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerStarted","Data":"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.023088 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerStarted","Data":"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.023622 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerStarted","Data":"060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.025114 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98338524-801f-465f-8845-1d061027c735","Type":"ContainerStarted","Data":"5d9233d0170bcce8ade2bfa80238657d2c9370b5536b8df7ceaca5a7602eba77"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.025156 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98338524-801f-465f-8845-1d061027c735","Type":"ContainerStarted","Data":"dd676ea54b6c53446b1c0a9f38edc82c24c523ff131724124eb75e6c356a88a6"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.025167 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"98338524-801f-465f-8845-1d061027c735","Type":"ContainerStarted","Data":"32df1557c7fd931265e0a3f2671b981c2fb64d61b4d7cc70b5f3fc1120d35a11"} Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.044987 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.044966486 podStartE2EDuration="2.044966486s" podCreationTimestamp="2026-01-21 16:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:06.043567757 +0000 UTC m=+5888.120400796" watchObservedRunningTime="2026-01-21 16:12:06.044966486 +0000 UTC m=+5888.121799515" Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.072035 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.072012336 podStartE2EDuration="2.072012336s" podCreationTimestamp="2026-01-21 16:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:06.064287759 +0000 UTC m=+5888.141120828" watchObservedRunningTime="2026-01-21 16:12:06.072012336 +0000 UTC m=+5888.148845355" Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.312235 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11cfdeec-5c4f-4051-8c8d-3c4c3e648e87" path="/var/lib/kubelet/pods/11cfdeec-5c4f-4051-8c8d-3c4c3e648e87/volumes" Jan 21 16:12:06 crc kubenswrapper[4902]: I0121 16:12:06.312873 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3b10dc-7950-4a5c-a31d-3fc11ce4de05" path="/var/lib/kubelet/pods/8b3b10dc-7950-4a5c-a31d-3fc11ce4de05/volumes" Jan 21 16:12:08 crc kubenswrapper[4902]: I0121 16:12:08.378144 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:12:09 crc kubenswrapper[4902]: I0121 16:12:09.757376 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:12:09 crc kubenswrapper[4902]: I0121 16:12:09.757511 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:12:13 crc kubenswrapper[4902]: I0121 16:12:13.378009 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 16:12:13 crc kubenswrapper[4902]: I0121 16:12:13.403956 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.147918 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.739938 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.740002 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.757805 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:12:14 crc kubenswrapper[4902]: I0121 16:12:14.757887 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:12:15 crc kubenswrapper[4902]: I0121 16:12:15.892266 4902 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:15 crc kubenswrapper[4902]: I0121 16:12:15.892357 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="98338524-801f-465f-8845-1d061027c735" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.87:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:15 crc kubenswrapper[4902]: I0121 16:12:15.892473 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:15 crc kubenswrapper[4902]: I0121 16:12:15.892522 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="98338524-801f-465f-8845-1d061027c735" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.87:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.747711 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.748472 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.749278 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.749578 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.752419 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.753190 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.765315 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.774332 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:12:24 crc kubenswrapper[4902]: I0121 16:12:24.777347 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.001902 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.003578 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.029484 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074135 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074198 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074272 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074291 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.074308 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176129 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176227 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176319 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176346 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.176372 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.177519 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.178032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.179410 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.181037 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.207721 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") pod \"dnsmasq-dns-5f7b5475f9-g5lzz\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.338655 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.352445 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:12:25 crc kubenswrapper[4902]: I0121 16:12:25.946078 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:12:26 crc kubenswrapper[4902]: I0121 16:12:26.316980 4902 generic.go:334] "Generic (PLEG): container finished" podID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerID="e76932770c6254b11b917bc645b83b0c1aaf28ee17d431c3d586506bef4ab067" exitCode=0 Jan 21 16:12:26 crc kubenswrapper[4902]: I0121 16:12:26.317071 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerDied","Data":"e76932770c6254b11b917bc645b83b0c1aaf28ee17d431c3d586506bef4ab067"} Jan 21 16:12:26 crc kubenswrapper[4902]: I0121 16:12:26.317321 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerStarted","Data":"38677ca61f06b9260ed5f983f8682c334bd87743eff5be88bd87e6a5090aa3da"} Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.327779 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerStarted","Data":"baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1"} Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.328117 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.346056 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" podStartSLOduration=3.346014241 podStartE2EDuration="3.346014241s" podCreationTimestamp="2026-01-21 16:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:27.343584272 +0000 UTC m=+5909.420417301" watchObservedRunningTime="2026-01-21 16:12:27.346014241 +0000 UTC m=+5909.422847280" Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.803548 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.804142 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" containerID="cri-o://e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" gracePeriod=30 Jan 21 16:12:27 crc kubenswrapper[4902]: I0121 16:12:27.804212 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" containerID="cri-o://eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" gracePeriod=30 Jan 21 16:12:28 crc kubenswrapper[4902]: I0121 16:12:28.386573 4902 generic.go:334] "Generic (PLEG): container finished" podID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerID="eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" exitCode=143 Jan 21 16:12:28 crc kubenswrapper[4902]: I0121 16:12:28.387463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerDied","Data":"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58"} Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.387616 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418489 4902 generic.go:334] "Generic (PLEG): container finished" podID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerID="e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" exitCode=0 Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418546 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418541 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerDied","Data":"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767"} Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07c018ac-4b51-418d-8410-6f3f6e84d0b0","Type":"ContainerDied","Data":"060c58d0d4654f115ba1bf8b0070a811e8ab56e3accae8cf59f2fe82e65ba5ea"} Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.418647 4902 scope.go:117] "RemoveContainer" containerID="e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.448452 4902 scope.go:117] "RemoveContainer" containerID="eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.473604 4902 scope.go:117] "RemoveContainer" containerID="e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" Jan 21 16:12:31 crc kubenswrapper[4902]: E0121 16:12:31.477521 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767\": container with ID starting with e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767 not found: ID does not exist" containerID="e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.477579 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767"} err="failed to get container status \"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767\": rpc error: code = NotFound desc = could not find container \"e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767\": container with ID starting with e35ebe913073c7b2b301e08e6998346270250c7bcf57b23cdd6fa6308292d767 not found: ID does not exist" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.477615 4902 scope.go:117] "RemoveContainer" containerID="eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" Jan 21 16:12:31 crc kubenswrapper[4902]: E0121 16:12:31.478162 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58\": container with ID starting with eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58 not found: ID does not exist" 
containerID="eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.478206 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58"} err="failed to get container status \"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58\": rpc error: code = NotFound desc = could not find container \"eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58\": container with ID starting with eb4d60279286f4dbbbf9a3fe7869af1e38823557f55acd269c526fe2dd6f3f58 not found: ID does not exist" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.496621 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") pod \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.496789 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") pod \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.496874 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") pod \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.496941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") pod \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\" (UID: \"07c018ac-4b51-418d-8410-6f3f6e84d0b0\") " Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.497593 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs" (OuterVolumeSpecName: "logs") pod "07c018ac-4b51-418d-8410-6f3f6e84d0b0" (UID: "07c018ac-4b51-418d-8410-6f3f6e84d0b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.512241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48" (OuterVolumeSpecName: "kube-api-access-fzc48") pod "07c018ac-4b51-418d-8410-6f3f6e84d0b0" (UID: "07c018ac-4b51-418d-8410-6f3f6e84d0b0"). InnerVolumeSpecName "kube-api-access-fzc48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.539661 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07c018ac-4b51-418d-8410-6f3f6e84d0b0" (UID: "07c018ac-4b51-418d-8410-6f3f6e84d0b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.564310 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data" (OuterVolumeSpecName: "config-data") pod "07c018ac-4b51-418d-8410-6f3f6e84d0b0" (UID: "07c018ac-4b51-418d-8410-6f3f6e84d0b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.599303 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.599437 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c018ac-4b51-418d-8410-6f3f6e84d0b0-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.599529 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c018ac-4b51-418d-8410-6f3f6e84d0b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.599603 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzc48\" (UniqueName: \"kubernetes.io/projected/07c018ac-4b51-418d-8410-6f3f6e84d0b0-kube-api-access-fzc48\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.761769 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.773297 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789208 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:31 crc kubenswrapper[4902]: E0121 16:12:31.789654 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789679 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" Jan 21 16:12:31 crc kubenswrapper[4902]: E0121 16:12:31.789699 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789706 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789942 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-api" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.789969 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" containerName="nova-api-log" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.797399 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.802473 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.802688 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.802835 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.806010 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906424 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8603f024-f71f-486b-93aa-e6397021aa48-logs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkspl\" (UniqueName: \"kubernetes.io/projected/8603f024-f71f-486b-93aa-e6397021aa48-kube-api-access-tkspl\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906609 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906644 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-config-data\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906674 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-public-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:31 crc kubenswrapper[4902]: I0121 16:12:31.906786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008745 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkspl\" (UniqueName: \"kubernetes.io/projected/8603f024-f71f-486b-93aa-e6397021aa48-kube-api-access-tkspl\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008791 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-combined-ca-bundle\") 
pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008810 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-config-data\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008831 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-public-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008890 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.008956 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8603f024-f71f-486b-93aa-e6397021aa48-logs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.009417 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8603f024-f71f-486b-93aa-e6397021aa48-logs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.012899 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-config-data\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.013380 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.018565 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.025565 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8603f024-f71f-486b-93aa-e6397021aa48-public-tls-certs\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.032626 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkspl\" (UniqueName: \"kubernetes.io/projected/8603f024-f71f-486b-93aa-e6397021aa48-kube-api-access-tkspl\") pod \"nova-api-0\" (UID: \"8603f024-f71f-486b-93aa-e6397021aa48\") " pod="openstack/nova-api-0" Jan 
21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.117333 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.306290 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c018ac-4b51-418d-8410-6f3f6e84d0b0" path="/var/lib/kubelet/pods/07c018ac-4b51-418d-8410-6f3f6e84d0b0/volumes" Jan 21 16:12:32 crc kubenswrapper[4902]: I0121 16:12:32.548103 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:12:33 crc kubenswrapper[4902]: I0121 16:12:33.438444 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8603f024-f71f-486b-93aa-e6397021aa48","Type":"ContainerStarted","Data":"060fa514553911bcf1a8ecf95920d87c8506c854b2448d57b1049b90596e1104"} Jan 21 16:12:33 crc kubenswrapper[4902]: I0121 16:12:33.438760 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8603f024-f71f-486b-93aa-e6397021aa48","Type":"ContainerStarted","Data":"885e2c15d8f17437f5572b584efdda7193154e6764bc22c0f7d36b56f21e1062"} Jan 21 16:12:33 crc kubenswrapper[4902]: I0121 16:12:33.438774 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8603f024-f71f-486b-93aa-e6397021aa48","Type":"ContainerStarted","Data":"d288cd73ed0e00fc8ff66949a8c44c74e42cd9ed73e662d9d606ce587d26ae87"} Jan 21 16:12:33 crc kubenswrapper[4902]: I0121 16:12:33.472833 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.472807867 podStartE2EDuration="2.472807867s" podCreationTimestamp="2026-01-21 16:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:33.458079403 +0000 UTC m=+5915.534912432" watchObservedRunningTime="2026-01-21 16:12:33.472807867 +0000 UTC m=+5915.549640916" Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.340190 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.401355 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.401659 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="dnsmasq-dns" containerID="cri-o://c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" gracePeriod=10 Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.891668 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.989723 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.989887 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.989915 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.989964 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:35 crc kubenswrapper[4902]: I0121 16:12:35.990079 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwjh7\" (UniqueName: \"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") pod \"b01a7675-d9b2-451e-8137-b069f892c1dd\" (UID: \"b01a7675-d9b2-451e-8137-b069f892c1dd\") " Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.049849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7" (OuterVolumeSpecName: "kube-api-access-nwjh7") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "kube-api-access-nwjh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.095000 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwjh7\" (UniqueName: \"kubernetes.io/projected/b01a7675-d9b2-451e-8137-b069f892c1dd-kube-api-access-nwjh7\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.131746 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config" (OuterVolumeSpecName: "config") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.158502 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.162267 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.189649 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b01a7675-d9b2-451e-8137-b069f892c1dd" (UID: "b01a7675-d9b2-451e-8137-b069f892c1dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.196806 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.196842 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.196851 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.196861 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a7675-d9b2-451e-8137-b069f892c1dd-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466133 4902 generic.go:334] "Generic (PLEG): container finished" podID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerID="c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" exitCode=0 Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466483 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerDied","Data":"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2"} Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" event={"ID":"b01a7675-d9b2-451e-8137-b069f892c1dd","Type":"ContainerDied","Data":"3e67c0e6a35688e67c9ae97a666562e7a06fee4e64265deb351ad6f1c7a1f81e"} Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466551 4902 scope.go:117] "RemoveContainer" containerID="c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.466742 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f49c7d99-gbqjj" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.492698 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.503147 4902 scope.go:117] "RemoveContainer" containerID="18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.507686 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66f49c7d99-gbqjj"] Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.565282 4902 scope.go:117] "RemoveContainer" containerID="c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" Jan 21 16:12:36 crc kubenswrapper[4902]: E0121 16:12:36.565765 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2\": container with ID starting with c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2 not found: ID does not exist" containerID="c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.565813 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2"} err="failed to get container status \"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2\": rpc error: code = NotFound desc = could not find container \"c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2\": container with ID starting with c535f1758ee150f1992060bd3564b49e81eddcffa42728ec1ac130c0449051c2 not found: ID does not exist" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.565845 4902 scope.go:117] "RemoveContainer" containerID="18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4" Jan 21 16:12:36 crc kubenswrapper[4902]: E0121 16:12:36.566285 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4\": container with ID starting with 18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4 not found: ID does not exist" containerID="18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4" Jan 21 16:12:36 crc kubenswrapper[4902]: I0121 16:12:36.566318 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4"} err="failed to get container status \"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4\": rpc error: code = NotFound desc = could not find container \"18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4\": container with ID starting with 18ff4c16518d3370beabf9b4d3b009e9a9341f86cab33887ac50b593cc5cd8c4 not found: ID does not exist" Jan 21 16:12:38 crc kubenswrapper[4902]: I0121 16:12:38.305544 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" path="/var/lib/kubelet/pods/b01a7675-d9b2-451e-8137-b069f892c1dd/volumes" Jan 21 16:12:42 crc kubenswrapper[4902]: I0121 16:12:42.118541 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:12:42 crc kubenswrapper[4902]: I0121 16:12:42.119138 4902 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:12:43 crc kubenswrapper[4902]: I0121 16:12:43.136253 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8603f024-f71f-486b-93aa-e6397021aa48" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.89:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:43 crc kubenswrapper[4902]: I0121 16:12:43.136278 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8603f024-f71f-486b-93aa-e6397021aa48" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.89:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.125768 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.126656 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.128614 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.134772 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.596174 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:12:52 crc kubenswrapper[4902]: I0121 16:12:52.603608 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.265364 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9bqlx"] Jan 21 16:13:15 crc kubenswrapper[4902]: E0121 16:13:15.267847 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="init" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.267967 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="init" Jan 21 16:13:15 crc kubenswrapper[4902]: E0121 16:13:15.268059 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="dnsmasq-dns" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.268165 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="dnsmasq-dns" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.268465 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01a7675-d9b2-451e-8137-b069f892c1dd" containerName="dnsmasq-dns" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.269373 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.271616 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-n9d6n" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.272196 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.272445 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.276947 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qfhz4"] Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.278880 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.289701 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bqlx"] Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.300779 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qfhz4"] Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.412504 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbhs\" (UniqueName: \"kubernetes.io/projected/cc475055-769c-4199-8486-3bdca7cd05bc-kube-api-access-pdbhs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.412601 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.412624 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d120f671-59d9-42ef-a905-2a6203c5896c-scripts\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.412651 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.413650 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-etc-ovs\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.413958 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc475055-769c-4199-8486-3bdca7cd05bc-scripts\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " 
pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.413993 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-run\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414037 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp62m\" (UniqueName: \"kubernetes.io/projected/d120f671-59d9-42ef-a905-2a6203c5896c-kube-api-access-xp62m\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414067 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-log\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414093 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-ovn-controller-tls-certs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414115 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-lib\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414144 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-log-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.414180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-combined-ca-bundle\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516003 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-log-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516195 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-combined-ca-bundle\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 
16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516272 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdbhs\" (UniqueName: \"kubernetes.io/projected/cc475055-769c-4199-8486-3bdca7cd05bc-kube-api-access-pdbhs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516359 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516390 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d120f671-59d9-42ef-a905-2a6203c5896c-scripts\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516405 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-log-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516433 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run-ovn\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516568 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-etc-ovs\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516608 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc475055-769c-4199-8486-3bdca7cd05bc-var-run\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516666 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc475055-769c-4199-8486-3bdca7cd05bc-scripts\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516722 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-run\") pod \"ovn-controller-ovs-qfhz4\" (UID: 
\"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516671 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-etc-ovs\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516854 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-run\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516792 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp62m\" (UniqueName: \"kubernetes.io/projected/d120f671-59d9-42ef-a905-2a6203c5896c-kube-api-access-xp62m\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516905 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-log\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516944 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-ovn-controller-tls-certs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.516981 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-lib\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.517138 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-log\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.517163 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d120f671-59d9-42ef-a905-2a6203c5896c-var-lib\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.518597 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d120f671-59d9-42ef-a905-2a6203c5896c-scripts\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.518791 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/cc475055-769c-4199-8486-3bdca7cd05bc-scripts\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.526752 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-combined-ca-bundle\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.532366 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc475055-769c-4199-8486-3bdca7cd05bc-ovn-controller-tls-certs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.537795 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp62m\" (UniqueName: \"kubernetes.io/projected/d120f671-59d9-42ef-a905-2a6203c5896c-kube-api-access-xp62m\") pod \"ovn-controller-ovs-qfhz4\" (UID: \"d120f671-59d9-42ef-a905-2a6203c5896c\") " pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.538421 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdbhs\" (UniqueName: \"kubernetes.io/projected/cc475055-769c-4199-8486-3bdca7cd05bc-kube-api-access-pdbhs\") pod \"ovn-controller-9bqlx\" (UID: \"cc475055-769c-4199-8486-3bdca7cd05bc\") " pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.592518 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:15 crc kubenswrapper[4902]: I0121 16:13:15.606583 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.162925 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bqlx"] Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.597726 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qfhz4"] Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.826428 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfhz4" event={"ID":"d120f671-59d9-42ef-a905-2a6203c5896c","Type":"ContainerStarted","Data":"b26b283eb7d1e59a25a4f61b3ea06c7f221252f97d5d110a9c4dacf74d46eabd"} Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.827906 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx" event={"ID":"cc475055-769c-4199-8486-3bdca7cd05bc","Type":"ContainerStarted","Data":"e4ef1f4f28a63c1a02e263e246ae858214053fb560cc3ed694f2fb3240790bd4"} Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.827948 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx" event={"ID":"cc475055-769c-4199-8486-3bdca7cd05bc","Type":"ContainerStarted","Data":"e9101f8442efbbc5805c3ccdaf9d284fa3247eeb21cf4ae4e16db1b7420ea28b"} Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.828067 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.854649 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-9vx8r"] Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.855771 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9bqlx" podStartSLOduration=1.855745984 podStartE2EDuration="1.855745984s" podCreationTimestamp="2026-01-21 16:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:16.853949914 +0000 UTC m=+5958.930782943" watchObservedRunningTime="2026-01-21 16:13:16.855745984 +0000 UTC m=+5958.932579013" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.857232 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.859942 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.875131 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9vx8r"] Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.957576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f209787-a9f8-41df-8298-79c1381eecbb-config\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.957638 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.957780 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovn-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.958023 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovs-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.958212 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vldw\" (UniqueName: \"kubernetes.io/projected/8f209787-a9f8-41df-8298-79c1381eecbb-kube-api-access-2vldw\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:16 crc kubenswrapper[4902]: I0121 16:13:16.958401 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-combined-ca-bundle\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060025 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovs-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060144 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vldw\" (UniqueName: \"kubernetes.io/projected/8f209787-a9f8-41df-8298-79c1381eecbb-kube-api-access-2vldw\") pod \"ovn-controller-metrics-9vx8r\" (UID: 
\"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-combined-ca-bundle\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060291 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f209787-a9f8-41df-8298-79c1381eecbb-config\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060340 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060366 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovn-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.060972 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovn-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.061582 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8f209787-a9f8-41df-8298-79c1381eecbb-ovs-rundir\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.061698 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f209787-a9f8-41df-8298-79c1381eecbb-config\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.066292 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.076018 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f209787-a9f8-41df-8298-79c1381eecbb-combined-ca-bundle\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc 
kubenswrapper[4902]: I0121 16:13:17.080066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vldw\" (UniqueName: \"kubernetes.io/projected/8f209787-a9f8-41df-8298-79c1381eecbb-kube-api-access-2vldw\") pod \"ovn-controller-metrics-9vx8r\" (UID: \"8f209787-a9f8-41df-8298-79c1381eecbb\") " pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.173833 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9vx8r" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.662756 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-5zjhh"] Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.664234 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.678672 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-5zjhh"] Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.710724 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9vx8r"] Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.774208 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.774259 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.774393 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.774422 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.837262 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9vx8r" event={"ID":"8f209787-a9f8-41df-8298-79c1381eecbb","Type":"ContainerStarted","Data":"c0412fb96fd0854ced90bef3895a5f41a5d7eb8d745c8feccf5b8535cd54d18f"} Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.839399 4902 generic.go:334] "Generic (PLEG): container finished" podID="d120f671-59d9-42ef-a905-2a6203c5896c" containerID="5b7f9af129d8661b92530cfc428a4f65d3db014e3bb5370887e02202ed36e6b3" exitCode=0 Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.839487 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfhz4" 
event={"ID":"d120f671-59d9-42ef-a905-2a6203c5896c","Type":"ContainerDied","Data":"5b7f9af129d8661b92530cfc428a4f65d3db014e3bb5370887e02202ed36e6b3"} Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.876633 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.876688 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.877652 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.893356 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") pod \"octavia-db-create-5zjhh\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:17 crc kubenswrapper[4902]: I0121 16:13:17.984167 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.452147 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-5zjhh"] Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.708208 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"] Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.712176 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.716578 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.725136 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"] Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.793884 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.794645 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.852471 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfhz4" event={"ID":"d120f671-59d9-42ef-a905-2a6203c5896c","Type":"ContainerStarted","Data":"6f0918632c6894d0d7d441cff75d067d472926ab490d362382bcd7612118b755"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.852514 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfhz4" event={"ID":"d120f671-59d9-42ef-a905-2a6203c5896c","Type":"ContainerStarted","Data":"b9b6b6a800006933e8879773b81bc7e69ea4c88fbe18ccf5825157d325f62e6d"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.853742 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.853772 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.855079 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9vx8r" event={"ID":"8f209787-a9f8-41df-8298-79c1381eecbb","Type":"ContainerStarted","Data":"5a0698f4ddebf7c74aee9fede19c8ca29fc52747cac94bb9a373d7dbb4e29206"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.857191 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5zjhh" event={"ID":"507bf37f-b9da-4064-970b-89f9a27589fe","Type":"ContainerStarted","Data":"5cf0b5bdbf01f12d44cd41471171a9c5244aec958a6477fc8835553eabc2f3b6"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.857223 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5zjhh" event={"ID":"507bf37f-b9da-4064-970b-89f9a27589fe","Type":"ContainerStarted","Data":"5512edb3ab0e1de94b7daaa1b8379762d2fcf9c9f42594905428c6a97181ed95"} Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.884157 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qfhz4" podStartSLOduration=3.884138127 podStartE2EDuration="3.884138127s" podCreationTimestamp="2026-01-21 16:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:18.874124936 +0000 UTC m=+5960.950957965" watchObservedRunningTime="2026-01-21 16:13:18.884138127 +0000 UTC m=+5960.960971146" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.896252 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.896411 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.897253 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.897898 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-create-5zjhh" podStartSLOduration=1.8978594229999999 podStartE2EDuration="1.897859423s" podCreationTimestamp="2026-01-21 16:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:18.890234949 +0000 UTC m=+5960.967067988" watchObservedRunningTime="2026-01-21 16:13:18.897859423 +0000 UTC m=+5960.974692442" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.915544 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") pod \"octavia-ae8b-account-create-update-q86xl\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:18 crc kubenswrapper[4902]: I0121 16:13:18.919844 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-9vx8r" podStartSLOduration=2.91981985 podStartE2EDuration="2.91981985s" podCreationTimestamp="2026-01-21 16:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:18.911159117 +0000 UTC m=+5960.987992166" watchObservedRunningTime="2026-01-21 16:13:18.91981985 +0000 UTC m=+5960.996652879" Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.048045 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.493917 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"] Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.866499 4902 generic.go:334] "Generic (PLEG): container finished" podID="507bf37f-b9da-4064-970b-89f9a27589fe" containerID="5cf0b5bdbf01f12d44cd41471171a9c5244aec958a6477fc8835553eabc2f3b6" exitCode=0 Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.866574 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5zjhh" event={"ID":"507bf37f-b9da-4064-970b-89f9a27589fe","Type":"ContainerDied","Data":"5cf0b5bdbf01f12d44cd41471171a9c5244aec958a6477fc8835553eabc2f3b6"} Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.869396 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ae8b-account-create-update-q86xl" event={"ID":"1baaefdd-ea47-4ac0-98d0-d370180b0eb0","Type":"ContainerStarted","Data":"bbd0af7b0e6a302b723bb3848d085087ea3fb23417c8175750a0c41598fe534f"} Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.869456 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ae8b-account-create-update-q86xl" event={"ID":"1baaefdd-ea47-4ac0-98d0-d370180b0eb0","Type":"ContainerStarted","Data":"571726af135d0494e6bf747d3ebd77dbc4c44f1575d7cc9867388c2db0a1ab73"} Jan 21 16:13:19 crc kubenswrapper[4902]: I0121 16:13:19.901563 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-ae8b-account-create-update-q86xl" podStartSLOduration=1.901544653 podStartE2EDuration="1.901544653s" podCreationTimestamp="2026-01-21 16:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:19.895433941 +0000 UTC m=+5961.972266970" watchObservedRunningTime="2026-01-21 16:13:19.901544653 +0000 UTC m=+5961.978377682" Jan 21 16:13:20 crc kubenswrapper[4902]: I0121 16:13:20.882680 4902 generic.go:334] "Generic (PLEG): container finished" podID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" containerID="bbd0af7b0e6a302b723bb3848d085087ea3fb23417c8175750a0c41598fe534f" exitCode=0 Jan 21 16:13:20 crc kubenswrapper[4902]: I0121 16:13:20.882812 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ae8b-account-create-update-q86xl" event={"ID":"1baaefdd-ea47-4ac0-98d0-d370180b0eb0","Type":"ContainerDied","Data":"bbd0af7b0e6a302b723bb3848d085087ea3fb23417c8175750a0c41598fe534f"} Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.247957 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.365114 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") pod \"507bf37f-b9da-4064-970b-89f9a27589fe\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.365398 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") pod \"507bf37f-b9da-4064-970b-89f9a27589fe\" (UID: \"507bf37f-b9da-4064-970b-89f9a27589fe\") " Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.365980 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "507bf37f-b9da-4064-970b-89f9a27589fe" (UID: "507bf37f-b9da-4064-970b-89f9a27589fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.366639 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/507bf37f-b9da-4064-970b-89f9a27589fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.371575 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v" (OuterVolumeSpecName: "kube-api-access-5xb6v") pod "507bf37f-b9da-4064-970b-89f9a27589fe" (UID: "507bf37f-b9da-4064-970b-89f9a27589fe"). InnerVolumeSpecName "kube-api-access-5xb6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.469215 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xb6v\" (UniqueName: \"kubernetes.io/projected/507bf37f-b9da-4064-970b-89f9a27589fe-kube-api-access-5xb6v\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.894128 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5zjhh" event={"ID":"507bf37f-b9da-4064-970b-89f9a27589fe","Type":"ContainerDied","Data":"5512edb3ab0e1de94b7daaa1b8379762d2fcf9c9f42594905428c6a97181ed95"} Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.894189 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5512edb3ab0e1de94b7daaa1b8379762d2fcf9c9f42594905428c6a97181ed95" Jan 21 16:13:21 crc kubenswrapper[4902]: I0121 16:13:21.896035 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5zjhh" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.262369 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.387295 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") pod \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.387384 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") pod \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\" (UID: \"1baaefdd-ea47-4ac0-98d0-d370180b0eb0\") " Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.387899 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1baaefdd-ea47-4ac0-98d0-d370180b0eb0" (UID: "1baaefdd-ea47-4ac0-98d0-d370180b0eb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.393366 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd" (OuterVolumeSpecName: "kube-api-access-vzsxd") pod "1baaefdd-ea47-4ac0-98d0-d370180b0eb0" (UID: "1baaefdd-ea47-4ac0-98d0-d370180b0eb0"). InnerVolumeSpecName "kube-api-access-vzsxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.489467 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.489505 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzsxd\" (UniqueName: \"kubernetes.io/projected/1baaefdd-ea47-4ac0-98d0-d370180b0eb0-kube-api-access-vzsxd\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.904324 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ae8b-account-create-update-q86xl" event={"ID":"1baaefdd-ea47-4ac0-98d0-d370180b0eb0","Type":"ContainerDied","Data":"571726af135d0494e6bf747d3ebd77dbc4c44f1575d7cc9867388c2db0a1ab73"} Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.904368 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="571726af135d0494e6bf747d3ebd77dbc4c44f1575d7cc9867388c2db0a1ab73" Jan 21 16:13:22 crc kubenswrapper[4902]: I0121 16:13:22.904420 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ae8b-account-create-update-q86xl" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.403483 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"] Jan 21 16:13:24 crc kubenswrapper[4902]: E0121 16:13:24.404120 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507bf37f-b9da-4064-970b-89f9a27589fe" containerName="mariadb-database-create" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404134 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="507bf37f-b9da-4064-970b-89f9a27589fe" containerName="mariadb-database-create" Jan 21 16:13:24 crc kubenswrapper[4902]: E0121 16:13:24.404144 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" containerName="mariadb-account-create-update" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404150 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" containerName="mariadb-account-create-update" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404319 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="507bf37f-b9da-4064-970b-89f9a27589fe" containerName="mariadb-database-create" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404344 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" containerName="mariadb-account-create-update" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.404985 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.419573 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"] Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.546502 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.547033 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.648595 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.648754 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " 
pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.650020 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.666279 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") pod \"octavia-persistence-db-create-q8nvb\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:24 crc kubenswrapper[4902]: I0121 16:13:24.755321 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.235953 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"] Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.683589 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"] Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.685124 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.694331 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"] Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.697079 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.879114 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.879334 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.937550 4902 generic.go:334] "Generic (PLEG): container finished" podID="f4a4e549-a509-40db-8756-e37432024793" containerID="26b51b45f191ff662cf71fe75dfa0a28808489ff71c63772b28558abe727c5a5" exitCode=0 Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.937599 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-q8nvb" event={"ID":"f4a4e549-a509-40db-8756-e37432024793","Type":"ContainerDied","Data":"26b51b45f191ff662cf71fe75dfa0a28808489ff71c63772b28558abe727c5a5"} Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.937632 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-persistence-db-create-q8nvb" event={"ID":"f4a4e549-a509-40db-8756-e37432024793","Type":"ContainerStarted","Data":"8a42c0d43476f2283056f1b8cdd8149baed43b95cce73b87e2c9bcb4869745cd"} Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.981711 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.981887 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:25 crc kubenswrapper[4902]: I0121 16:13:25.982902 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:26 crc kubenswrapper[4902]: I0121 16:13:26.003253 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") pod \"octavia-d1a6-account-create-update-cw969\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:26 crc kubenswrapper[4902]: I0121 16:13:26.299998 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:26 crc kubenswrapper[4902]: I0121 16:13:26.768728 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"] Jan 21 16:13:26 crc kubenswrapper[4902]: I0121 16:13:26.946656 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d1a6-account-create-update-cw969" event={"ID":"502e21f3-ea57-4f04-8e23-9b45c7a07ca2","Type":"ContainerStarted","Data":"240574aacfc77d8f19c897fe77d38feb19c1b756f6a4d42a02116477198ec950"} Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.367999 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.514297 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") pod \"f4a4e549-a509-40db-8756-e37432024793\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.514380 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") pod \"f4a4e549-a509-40db-8756-e37432024793\" (UID: \"f4a4e549-a509-40db-8756-e37432024793\") " Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.515182 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4a4e549-a509-40db-8756-e37432024793" (UID: "f4a4e549-a509-40db-8756-e37432024793"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.524824 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9" (OuterVolumeSpecName: "kube-api-access-v4tm9") pod "f4a4e549-a509-40db-8756-e37432024793" (UID: "f4a4e549-a509-40db-8756-e37432024793"). InnerVolumeSpecName "kube-api-access-v4tm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.618065 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4tm9\" (UniqueName: \"kubernetes.io/projected/f4a4e549-a509-40db-8756-e37432024793-kube-api-access-v4tm9\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.618427 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4a4e549-a509-40db-8756-e37432024793-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.956527 4902 generic.go:334] "Generic (PLEG): container finished" podID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" containerID="9ceea852acb3ca8b99175935197b72276107562be97cda3fb8e5495a3f66a192" exitCode=0 Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.956593 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d1a6-account-create-update-cw969" event={"ID":"502e21f3-ea57-4f04-8e23-9b45c7a07ca2","Type":"ContainerDied","Data":"9ceea852acb3ca8b99175935197b72276107562be97cda3fb8e5495a3f66a192"} Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.958233 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-q8nvb" event={"ID":"f4a4e549-a509-40db-8756-e37432024793","Type":"ContainerDied","Data":"8a42c0d43476f2283056f1b8cdd8149baed43b95cce73b87e2c9bcb4869745cd"} Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.958256 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a42c0d43476f2283056f1b8cdd8149baed43b95cce73b87e2c9bcb4869745cd" Jan 21 16:13:27 crc kubenswrapper[4902]: I0121 16:13:27.958300 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-q8nvb" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.315714 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.365840 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") pod \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.366010 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") pod \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\" (UID: \"502e21f3-ea57-4f04-8e23-9b45c7a07ca2\") " Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.366516 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "502e21f3-ea57-4f04-8e23-9b45c7a07ca2" (UID: "502e21f3-ea57-4f04-8e23-9b45c7a07ca2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.372513 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n" (OuterVolumeSpecName: "kube-api-access-6629n") pod "502e21f3-ea57-4f04-8e23-9b45c7a07ca2" (UID: "502e21f3-ea57-4f04-8e23-9b45c7a07ca2"). InnerVolumeSpecName "kube-api-access-6629n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.467929 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6629n\" (UniqueName: \"kubernetes.io/projected/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-kube-api-access-6629n\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.467968 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502e21f3-ea57-4f04-8e23-9b45c7a07ca2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.976673 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d1a6-account-create-update-cw969" event={"ID":"502e21f3-ea57-4f04-8e23-9b45c7a07ca2","Type":"ContainerDied","Data":"240574aacfc77d8f19c897fe77d38feb19c1b756f6a4d42a02116477198ec950"} Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.976922 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240574aacfc77d8f19c897fe77d38feb19c1b756f6a4d42a02116477198ec950" Jan 21 16:13:29 crc kubenswrapper[4902]: I0121 16:13:29.976753 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-d1a6-account-create-update-cw969" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.793465 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:13:31 crc kubenswrapper[4902]: E0121 16:13:31.794281 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" containerName="mariadb-account-create-update" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.794299 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" containerName="mariadb-account-create-update" Jan 21 16:13:31 crc kubenswrapper[4902]: E0121 16:13:31.794317 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a4e549-a509-40db-8756-e37432024793" containerName="mariadb-database-create" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.794325 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a4e549-a509-40db-8756-e37432024793" containerName="mariadb-database-create" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.794568 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a4e549-a509-40db-8756-e37432024793" containerName="mariadb-database-create" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.794597 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" containerName="mariadb-account-create-update" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.796135 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.798576 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.802818 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.803423 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-d4crj" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.804606 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.804693 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.816943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817302 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817415 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817455 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.817471 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.919227 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.919299 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.919895 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.919978 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.920564 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.920594 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.920610 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.920882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.928813 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.928961 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.929776 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:31 crc kubenswrapper[4902]: I0121 16:13:31.939231 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") pod \"octavia-api-68fdc4858c-f84fc\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:32 crc kubenswrapper[4902]: I0121 16:13:32.116095 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:32 crc kubenswrapper[4902]: I0121 16:13:32.649579 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:13:32 crc kubenswrapper[4902]: I0121 16:13:32.663881 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:13:33 crc kubenswrapper[4902]: I0121 16:13:33.007227 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerStarted","Data":"74e10b4cb1523503d4eeb2bf1bd717e5c53fb870c6fa0cbb79018c89de00319b"} Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.065402 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.076439 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.097128 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-fd43-account-create-update-f6bm7"] Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.105413 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9fbzk"] Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.305977 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b9f6374-66c7-4124-b410-c5d60c8f0d6b" path="/var/lib/kubelet/pods/0b9f6374-66c7-4124-b410-c5d60c8f0d6b/volumes" Jan 21 16:13:44 crc kubenswrapper[4902]: I0121 16:13:44.306832 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd3463ca-5f37-4a7e-9f53-c32f2abe3502" path="/var/lib/kubelet/pods/dd3463ca-5f37-4a7e-9f53-c32f2abe3502/volumes" Jan 21 16:13:47 crc kubenswrapper[4902]: I0121 16:13:47.498559 4902 generic.go:334] "Generic (PLEG): container finished" podID="d51aa030-a37e-41cd-8552-491e33fe846f" containerID="fc991874ee6a71ec6a7ac8920dba0cf1f3cecb0c9f70e1aa730945d643c88576" exitCode=0 Jan 21 16:13:47 crc kubenswrapper[4902]: I0121 16:13:47.498713 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerDied","Data":"fc991874ee6a71ec6a7ac8920dba0cf1f3cecb0c9f70e1aa730945d643c88576"} Jan 21 16:13:47 crc kubenswrapper[4902]: I0121 16:13:47.770452 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:13:47 crc kubenswrapper[4902]: I0121 16:13:47.770509 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:13:48 crc kubenswrapper[4902]: I0121 16:13:48.509966 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerStarted","Data":"85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a"} Jan 21 16:13:48 crc 
kubenswrapper[4902]: I0121 16:13:48.510299 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerStarted","Data":"5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e"} Jan 21 16:13:48 crc kubenswrapper[4902]: I0121 16:13:48.510427 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:48 crc kubenswrapper[4902]: I0121 16:13:48.532879 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-68fdc4858c-f84fc" podStartSLOduration=3.493472542 podStartE2EDuration="17.532856911s" podCreationTimestamp="2026-01-21 16:13:31 +0000 UTC" firstStartedPulling="2026-01-21 16:13:32.66358222 +0000 UTC m=+5974.740415249" lastFinishedPulling="2026-01-21 16:13:46.702966589 +0000 UTC m=+5988.779799618" observedRunningTime="2026-01-21 16:13:48.525459433 +0000 UTC m=+5990.602292462" watchObservedRunningTime="2026-01-21 16:13:48.532856911 +0000 UTC m=+5990.609689940" Jan 21 16:13:49 crc kubenswrapper[4902]: I0121 16:13:49.520293 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.653601 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9bqlx" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.665768 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.672538 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qfhz4" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.800624 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.802173 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.805845 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.812129 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905435 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905776 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905813 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905903 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905947 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwkz\" (UniqueName: \"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:50 crc kubenswrapper[4902]: I0121 16:13:50.905971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008243 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008308 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwkz\" (UniqueName: 
\"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008337 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008411 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008440 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008467 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008804 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.008881 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.009984 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.012184 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.035492 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.036208 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwkz\" (UniqueName: \"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") pod \"ovn-controller-9bqlx-config-78jk5\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.043459 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-m6jz2"] Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.127519 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:51 crc kubenswrapper[4902]: I0121 16:13:51.763639 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:51 crc kubenswrapper[4902]: W0121 16:13:51.773894 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1616834_8fce_44d6_9551_52dc5e1012e4.slice/crio-b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46 WatchSource:0}: Error finding container b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46: Status 404 returned error can't find the container with id b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46 Jan 21 16:13:52 crc kubenswrapper[4902]: I0121 16:13:52.308020 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="072d9d46-6930-490e-9561-cd7e75f05451" path="/var/lib/kubelet/pods/072d9d46-6930-490e-9561-cd7e75f05451/volumes" Jan 21 16:13:52 crc kubenswrapper[4902]: I0121 16:13:52.553990 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx-config-78jk5" event={"ID":"e1616834-8fce-44d6-9551-52dc5e1012e4","Type":"ContainerStarted","Data":"e3e9c87d9d90cc49442da28637d2cced6b19e9645d80d76c03c98029e5898f54"} Jan 21 16:13:52 crc kubenswrapper[4902]: I0121 16:13:52.554077 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx-config-78jk5" event={"ID":"e1616834-8fce-44d6-9551-52dc5e1012e4","Type":"ContainerStarted","Data":"b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46"} Jan 21 16:13:52 crc kubenswrapper[4902]: I0121 16:13:52.578551 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9bqlx-config-78jk5" podStartSLOduration=2.578530234 podStartE2EDuration="2.578530234s" podCreationTimestamp="2026-01-21 16:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:52.573124172 +0000 UTC m=+5994.649957211" watchObservedRunningTime="2026-01-21 16:13:52.578530234 +0000 UTC m=+5994.655363263" Jan 21 16:13:53 crc kubenswrapper[4902]: I0121 16:13:53.568122 4902 generic.go:334] "Generic (PLEG): container finished" podID="e1616834-8fce-44d6-9551-52dc5e1012e4" containerID="e3e9c87d9d90cc49442da28637d2cced6b19e9645d80d76c03c98029e5898f54" 
exitCode=0 Jan 21 16:13:53 crc kubenswrapper[4902]: I0121 16:13:53.568232 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx-config-78jk5" event={"ID":"e1616834-8fce-44d6-9551-52dc5e1012e4","Type":"ContainerDied","Data":"e3e9c87d9d90cc49442da28637d2cced6b19e9645d80d76c03c98029e5898f54"} Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.022319 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193688 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnwkz\" (UniqueName: \"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193755 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193825 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193868 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.193908 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194015 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") pod \"e1616834-8fce-44d6-9551-52dc5e1012e4\" (UID: \"e1616834-8fce-44d6-9551-52dc5e1012e4\") " Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194205 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194240 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run" (OuterVolumeSpecName: "var-run") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194381 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194700 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.194992 4902 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.195017 4902 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.195033 4902 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.195066 4902 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1616834-8fce-44d6-9551-52dc5e1012e4-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.195406 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts" (OuterVolumeSpecName: "scripts") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.200640 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz" (OuterVolumeSpecName: "kube-api-access-xnwkz") pod "e1616834-8fce-44d6-9551-52dc5e1012e4" (UID: "e1616834-8fce-44d6-9551-52dc5e1012e4"). InnerVolumeSpecName "kube-api-access-xnwkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.296293 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1616834-8fce-44d6-9551-52dc5e1012e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.296321 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnwkz\" (UniqueName: \"kubernetes.io/projected/e1616834-8fce-44d6-9551-52dc5e1012e4-kube-api-access-xnwkz\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.590885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bqlx-config-78jk5" event={"ID":"e1616834-8fce-44d6-9551-52dc5e1012e4","Type":"ContainerDied","Data":"b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46"} Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.591593 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b861559f35c5cbc3ba73d83f74230652d5459d73a1da1c35233a69e6609f5e46" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.590912 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bqlx-config-78jk5" Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.671844 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:55 crc kubenswrapper[4902]: I0121 16:13:55.684686 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9bqlx-config-78jk5"] Jan 21 16:13:56 crc kubenswrapper[4902]: I0121 16:13:56.306213 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1616834-8fce-44d6-9551-52dc5e1012e4" path="/var/lib/kubelet/pods/e1616834-8fce-44d6-9551-52dc5e1012e4/volumes" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.666117 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-kn74s"] Jan 21 16:14:02 crc kubenswrapper[4902]: E0121 16:14:02.666835 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1616834-8fce-44d6-9551-52dc5e1012e4" containerName="ovn-config" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.666847 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1616834-8fce-44d6-9551-52dc5e1012e4" containerName="ovn-config" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.667020 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1616834-8fce-44d6-9551-52dc5e1012e4" containerName="ovn-config" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.667921 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.669972 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.670263 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.670610 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.687488 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-kn74s"] Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.856566 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data-merged\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.856750 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.856790 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-hm-ports\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.856935 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-scripts\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.958754 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.958795 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-hm-ports\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.958877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-scripts\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.958930 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data-merged\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.959389 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data-merged\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.960150 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-hm-ports\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.964681 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-config-data\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.974852 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/802fca2f-9dae-4f46-aaf3-c688c8ebbdfb-scripts\") pod \"octavia-rsyslog-kn74s\" (UID: \"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb\") " pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:02 crc kubenswrapper[4902]: I0121 16:14:02.986658 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.622296 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-kn74s"] Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.715462 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.717405 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.725719 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.727182 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-kn74s" event={"ID":"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb","Type":"ContainerStarted","Data":"f20088af3212fe9054842943e360bd5030a5079c52ee84fd40b8af92ab57aacf"} Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.733723 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.797610 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.797733 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.899489 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.899637 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.900073 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:03 crc kubenswrapper[4902]: I0121 16:14:03.907001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-zmb5b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.059333 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.642161 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-vrr2k"] Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.644277 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.646575 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.674679 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-vrr2k"] Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.674980 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.675108 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.675214 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.675308 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.715615 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.754321 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerStarted","Data":"b6f108856fdfaa3ddd29d66fca750e5174c3b7beab2b9dd46ad0290fb86745d4"} Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.777471 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.777533 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc 
kubenswrapper[4902]: I0121 16:14:04.777576 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.777613 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.779346 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.783768 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.784456 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.799970 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") pod \"octavia-db-sync-vrr2k\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:04 crc kubenswrapper[4902]: I0121 16:14:04.982519 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:05 crc kubenswrapper[4902]: I0121 16:14:05.037451 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:14:05 crc kubenswrapper[4902]: I0121 16:14:05.046256 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-clvkp"] Jan 21 16:14:05 crc kubenswrapper[4902]: I0121 16:14:05.836380 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-vrr2k"] Jan 21 16:14:05 crc kubenswrapper[4902]: W0121 16:14:05.847617 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60a6ab47_0bbe_428a_82f5_478fc4c52e8a.slice/crio-bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366 WatchSource:0}: Error finding container bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366: Status 404 returned error can't find the container with id bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366 Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.308100 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663c22ab-26c3-4d29-8965-255dc095eef2" path="/var/lib/kubelet/pods/663c22ab-26c3-4d29-8965-255dc095eef2/volumes" Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.793350 4902 generic.go:334] "Generic (PLEG): container finished" podID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerID="abc9a540052a00b1952e4ccbff28d0fd5e66b03f552886a2028474527bd5343e" exitCode=0 Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.793442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerDied","Data":"abc9a540052a00b1952e4ccbff28d0fd5e66b03f552886a2028474527bd5343e"} Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.793496 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerStarted","Data":"bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366"} Jan 21 16:14:06 crc kubenswrapper[4902]: I0121 16:14:06.799467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-kn74s" event={"ID":"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb","Type":"ContainerStarted","Data":"125ce162fb1d06f6ce0ad368bac748d97ad4258b813971480c2bc437990b851b"} Jan 21 16:14:07 crc kubenswrapper[4902]: I0121 16:14:07.326782 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:14:07 crc kubenswrapper[4902]: I0121 16:14:07.391744 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:14:07 crc kubenswrapper[4902]: I0121 16:14:07.810298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerStarted","Data":"f4cdf18149c84ac20ab00cae2362d90191fa45e99a1761f8508af240e2f326b6"} Jan 21 16:14:07 crc kubenswrapper[4902]: I0121 16:14:07.828998 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-vrr2k" podStartSLOduration=3.828981834 podStartE2EDuration="3.828981834s" podCreationTimestamp="2026-01-21 16:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:14:07.827915654 +0000 UTC m=+6009.904748693" watchObservedRunningTime="2026-01-21 16:14:07.828981834 +0000 UTC m=+6009.905814863" Jan 21 16:14:08 crc kubenswrapper[4902]: I0121 16:14:08.822564 4902 generic.go:334] "Generic (PLEG): container finished" podID="802fca2f-9dae-4f46-aaf3-c688c8ebbdfb" containerID="125ce162fb1d06f6ce0ad368bac748d97ad4258b813971480c2bc437990b851b" exitCode=0 Jan 21 16:14:08 crc kubenswrapper[4902]: I0121 16:14:08.822665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-kn74s" event={"ID":"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb","Type":"ContainerDied","Data":"125ce162fb1d06f6ce0ad368bac748d97ad4258b813971480c2bc437990b851b"} Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.839526 4902 generic.go:334] "Generic (PLEG): container finished" podID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerID="f4cdf18149c84ac20ab00cae2362d90191fa45e99a1761f8508af240e2f326b6" exitCode=0 Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.840911 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerDied","Data":"f4cdf18149c84ac20ab00cae2362d90191fa45e99a1761f8508af240e2f326b6"} Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.843697 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-kn74s" event={"ID":"802fca2f-9dae-4f46-aaf3-c688c8ebbdfb","Type":"ContainerStarted","Data":"8b9722f803eddd28e0f8df0cf26577d7cd559dd8516a1f4276562320f20d3d16"} Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.844742 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:10 crc kubenswrapper[4902]: I0121 16:14:10.882238 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-kn74s" podStartSLOduration=2.6046143649999998 podStartE2EDuration="8.882221652s" podCreationTimestamp="2026-01-21 16:14:02 +0000 UTC" firstStartedPulling="2026-01-21 16:14:03.647101182 +0000 UTC m=+6005.723934211" lastFinishedPulling="2026-01-21 16:14:09.924708469 +0000 UTC m=+6012.001541498" observedRunningTime="2026-01-21 16:14:10.877954282 +0000 UTC m=+6012.954787321" watchObservedRunningTime="2026-01-21 16:14:10.882221652 +0000 UTC m=+6012.959054681" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.224813 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.329815 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") pod \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.329973 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") pod \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.330073 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data\") pod \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.330244 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") pod \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\" (UID: \"60a6ab47-0bbe-428a-82f5-478fc4c52e8a\") " Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.336805 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data" (OuterVolumeSpecName: "config-data") pod "60a6ab47-0bbe-428a-82f5-478fc4c52e8a" (UID: "60a6ab47-0bbe-428a-82f5-478fc4c52e8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.337184 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts" (OuterVolumeSpecName: "scripts") pod "60a6ab47-0bbe-428a-82f5-478fc4c52e8a" (UID: "60a6ab47-0bbe-428a-82f5-478fc4c52e8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.358639 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "60a6ab47-0bbe-428a-82f5-478fc4c52e8a" (UID: "60a6ab47-0bbe-428a-82f5-478fc4c52e8a"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.358939 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60a6ab47-0bbe-428a-82f5-478fc4c52e8a" (UID: "60a6ab47-0bbe-428a-82f5-478fc4c52e8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.432639 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.432675 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.432685 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.432694 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a6ab47-0bbe-428a-82f5-478fc4c52e8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.870399 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-vrr2k" event={"ID":"60a6ab47-0bbe-428a-82f5-478fc4c52e8a","Type":"ContainerDied","Data":"bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366"} Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.870498 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc2c17c7cde44a49901c354fc86f24095282904510ad331016a89de7e5e63366" Jan 21 16:14:12 crc kubenswrapper[4902]: I0121 16:14:12.870501 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-vrr2k" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.503256 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-77584c4dc-lmbjv"] Jan 21 16:14:13 crc kubenswrapper[4902]: E0121 16:14:13.503653 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="init" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.503665 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="init" Jan 21 16:14:13 crc kubenswrapper[4902]: E0121 16:14:13.503684 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="octavia-db-sync" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.503689 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="octavia-db-sync" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.503875 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" containerName="octavia-db-sync" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.505257 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.511670 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.511791 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.527200 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-77584c4dc-lmbjv"] Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.556574 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data-merged\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.556865 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-ovndb-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557176 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-octavia-run\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557235 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557332 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-public-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557440 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-internal-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557461 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-scripts\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.557625 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-combined-ca-bundle\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659305 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-internal-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659349 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-scripts\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659417 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-combined-ca-bundle\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659466 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data-merged\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659491 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-ovndb-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659545 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-octavia-run\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659564 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.659590 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-public-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.660433 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data-merged\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.660753 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/441cf475-eec9-4cee-84ab-7807e9ab0b75-octavia-run\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.664537 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-public-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.664865 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-config-data\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.664957 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-combined-ca-bundle\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.665226 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-internal-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.666373 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-ovndb-tls-certs\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.677023 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/441cf475-eec9-4cee-84ab-7807e9ab0b75-scripts\") pod \"octavia-api-77584c4dc-lmbjv\" (UID: \"441cf475-eec9-4cee-84ab-7807e9ab0b75\") " pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:13 crc kubenswrapper[4902]: I0121 16:14:13.821720 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:14 crc kubenswrapper[4902]: I0121 16:14:14.275430 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-77584c4dc-lmbjv"] Jan 21 16:14:14 crc kubenswrapper[4902]: I0121 16:14:14.889335 4902 generic.go:334] "Generic (PLEG): container finished" podID="441cf475-eec9-4cee-84ab-7807e9ab0b75" containerID="ea2fe82cde03c6a78f9d553b103a1c371e701ec1b79c475d35f8a86034f94021" exitCode=0 Jan 21 16:14:14 crc kubenswrapper[4902]: I0121 16:14:14.889387 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-77584c4dc-lmbjv" event={"ID":"441cf475-eec9-4cee-84ab-7807e9ab0b75","Type":"ContainerDied","Data":"ea2fe82cde03c6a78f9d553b103a1c371e701ec1b79c475d35f8a86034f94021"} Jan 21 16:14:14 crc kubenswrapper[4902]: I0121 16:14:14.889648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-77584c4dc-lmbjv" event={"ID":"441cf475-eec9-4cee-84ab-7807e9ab0b75","Type":"ContainerStarted","Data":"a793d9ac345fa82d74e2acc0874f3a61d56ae2812774c4b1aaeb14624d98850c"} Jan 21 16:14:15 crc kubenswrapper[4902]: E0121 16:14:15.129931 4902 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/2d/2d21773ac1b6ba8fb68f6ae19d75d7f308e3df9e2075fcfc8572117006de3334?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260121T161405Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=1a3ed4259338db31ffcff2399d8d4d8b6acc448f04f0502e93c639ecb435e60f®ion=us-east-1&namespace=gthiemonge&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=octavia-amphora-image&akamai_signature=exp=1769012945~hmac=9ea07ec601ef21c82c3e7bac9e3d6ff616e5094816368e2302f9299e27c52ce9\": net/http: TLS handshake timeout" image="quay.io/gthiemonge/octavia-amphora-image:latest" Jan 21 16:14:15 crc kubenswrapper[4902]: E0121 16:14:15.130359 4902 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/gthiemonge/octavia-amphora-image,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/usr/local/apache2/htdocs,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-7b97d6bc64-zmb5b_openstack(e68cbe1e-2ace-4011-856c-5fa393f45b4b): ErrImagePull: parsing image configuration: Get 
\"https://cdn01.quay.io/quayio-production-s3/sha256/2d/2d21773ac1b6ba8fb68f6ae19d75d7f308e3df9e2075fcfc8572117006de3334?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260121T161405Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=1a3ed4259338db31ffcff2399d8d4d8b6acc448f04f0502e93c639ecb435e60f®ion=us-east-1&namespace=gthiemonge&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=octavia-amphora-image&akamai_signature=exp=1769012945~hmac=9ea07ec601ef21c82c3e7bac9e3d6ff616e5094816368e2302f9299e27c52ce9\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 16:14:15 crc kubenswrapper[4902]: E0121 16:14:15.131548 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"parsing image configuration: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/2d/2d21773ac1b6ba8fb68f6ae19d75d7f308e3df9e2075fcfc8572117006de3334?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20260121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260121T161405Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=1a3ed4259338db31ffcff2399d8d4d8b6acc448f04f0502e93c639ecb435e60f®ion=us-east-1&namespace=gthiemonge&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=octavia-amphora-image&akamai_signature=exp=1769012945~hmac=9ea07ec601ef21c82c3e7bac9e3d6ff616e5094816368e2302f9299e27c52ce9\\\": net/http: TLS handshake timeout\"" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.900715 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-77584c4dc-lmbjv" event={"ID":"441cf475-eec9-4cee-84ab-7807e9ab0b75","Type":"ContainerStarted","Data":"e93cb80329800d15e2b5ce77b2dc61ef058262b83f15dc3c21436462abf4dfe3"} Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.900780 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-77584c4dc-lmbjv" event={"ID":"441cf475-eec9-4cee-84ab-7807e9ab0b75","Type":"ContainerStarted","Data":"56e119d97ab6875ebdbdb5528d15ca45102e19220044055c1bb7e639486d7373"} Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.900805 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.900825 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:15 crc kubenswrapper[4902]: E0121 16:14:15.902698 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/gthiemonge/octavia-amphora-image\\\"\"" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" Jan 21 16:14:15 crc kubenswrapper[4902]: I0121 16:14:15.933254 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-77584c4dc-lmbjv" podStartSLOduration=2.933223673 podStartE2EDuration="2.933223673s" podCreationTimestamp="2026-01-21 16:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:14:15.926674869 +0000 UTC m=+6018.003507908" watchObservedRunningTime="2026-01-21 
16:14:15.933223673 +0000 UTC m=+6018.010056722" Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.769580 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.769855 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.769912 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.771467 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.771700 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" gracePeriod=600 Jan 21 16:14:17 crc kubenswrapper[4902]: E0121 16:14:17.911961 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.920951 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" exitCode=0 Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.920992 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac"} Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.921022 4902 scope.go:117] "RemoveContainer" containerID="db5e286ed12d5cdac8541e22aa5c6794629a15f27a4e802d85c369fc2b4f4f6b" Jan 21 16:14:17 crc kubenswrapper[4902]: I0121 16:14:17.921598 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:14:17 crc kubenswrapper[4902]: E0121 16:14:17.921825 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:14:18 crc kubenswrapper[4902]: I0121 16:14:18.032098 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-kn74s" Jan 21 16:14:25 crc kubenswrapper[4902]: I0121 16:14:25.311518 4902 scope.go:117] "RemoveContainer" containerID="c898501a393ec12d8bdad3ffbecedd820d45983cad8f57e77c1b8bf1f2602ced" Jan 21 16:14:25 crc kubenswrapper[4902]: I0121 16:14:25.860086 4902 scope.go:117] "RemoveContainer" containerID="94e5637468147f71d442912ca57ee6a969ce1c74828b8408d61b57b6d26eda33" Jan 21 16:14:25 crc kubenswrapper[4902]: I0121 16:14:25.883526 4902 scope.go:117] "RemoveContainer" containerID="994f6f05fed4b0e62e48fa8578c2ecb21f387018408d5954555b07ebf19b3b49" Jan 21 16:14:25 crc kubenswrapper[4902]: I0121 16:14:25.932623 4902 scope.go:117] "RemoveContainer" containerID="8e7b81ffed093606aaee9fbef35f94103abd1548cced4aa289004fb371568398" Jan 21 16:14:29 crc kubenswrapper[4902]: I0121 16:14:29.294976 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:14:29 crc kubenswrapper[4902]: E0121 16:14:29.297033 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:14:32 crc kubenswrapper[4902]: I0121 16:14:32.938842 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:33 crc kubenswrapper[4902]: I0121 16:14:33.072331 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-77584c4dc-lmbjv" Jan 21 16:14:33 crc kubenswrapper[4902]: I0121 16:14:33.152066 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:14:33 crc kubenswrapper[4902]: I0121 16:14:33.152366 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-68fdc4858c-f84fc" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api" containerID="cri-o://5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e" gracePeriod=30 Jan 21 16:14:33 crc kubenswrapper[4902]: I0121 16:14:33.152865 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-68fdc4858c-f84fc" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api-provider-agent" containerID="cri-o://85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a" gracePeriod=30 Jan 21 16:14:34 crc kubenswrapper[4902]: I0121 16:14:34.133756 4902 generic.go:334] "Generic (PLEG): container finished" podID="d51aa030-a37e-41cd-8552-491e33fe846f" containerID="85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a" exitCode=0 Jan 21 16:14:34 crc kubenswrapper[4902]: I0121 16:14:34.133800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" 
event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerDied","Data":"85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a"} Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.167295 4902 generic.go:334] "Generic (PLEG): container finished" podID="d51aa030-a37e-41cd-8552-491e33fe846f" containerID="5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e" exitCode=0 Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.167369 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerDied","Data":"5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e"} Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.445246 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562547 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562613 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562728 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562781 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562801 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.562827 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") pod \"d51aa030-a37e-41cd-8552-491e33fe846f\" (UID: \"d51aa030-a37e-41cd-8552-491e33fe846f\") " Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.563329 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run" (OuterVolumeSpecName: "octavia-run") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "octavia-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.567849 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data" (OuterVolumeSpecName: "config-data") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.576827 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts" (OuterVolumeSpecName: "scripts") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.618980 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.625331 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.665999 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.666090 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.666103 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.666115 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.666126 4902 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d51aa030-a37e-41cd-8552-491e33fe846f-octavia-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.710330 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d51aa030-a37e-41cd-8552-491e33fe846f" (UID: "d51aa030-a37e-41cd-8552-491e33fe846f"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:14:37 crc kubenswrapper[4902]: I0121 16:14:37.770250 4902 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51aa030-a37e-41cd-8552-491e33fe846f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.182297 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-68fdc4858c-f84fc" event={"ID":"d51aa030-a37e-41cd-8552-491e33fe846f","Type":"ContainerDied","Data":"74e10b4cb1523503d4eeb2bf1bd717e5c53fb870c6fa0cbb79018c89de00319b"} Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.182363 4902 scope.go:117] "RemoveContainer" containerID="85be75f32741c5a6c474bfc60d2bfd7e17468dd268de43bcd1ea2514c5405c0a" Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.182411 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-68fdc4858c-f84fc" Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.227242 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.236605 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-68fdc4858c-f84fc"] Jan 21 16:14:38 crc kubenswrapper[4902]: I0121 16:14:38.310871 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" path="/var/lib/kubelet/pods/d51aa030-a37e-41cd-8552-491e33fe846f/volumes" Jan 21 16:14:40 crc kubenswrapper[4902]: I0121 16:14:40.295570 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:14:40 crc kubenswrapper[4902]: E0121 16:14:40.296094 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:14:44 crc kubenswrapper[4902]: I0121 16:14:44.955526 4902 scope.go:117] "RemoveContainer" containerID="5305efd4c47cb14e61d707c3da54bab682028a330018a10560da985768cafc0e" Jan 21 16:14:45 crc kubenswrapper[4902]: I0121 16:14:45.225749 4902 scope.go:117] "RemoveContainer" containerID="fc991874ee6a71ec6a7ac8920dba0cf1f3cecb0c9f70e1aa730945d643c88576" Jan 21 16:14:46 crc kubenswrapper[4902]: I0121 16:14:46.266210 4902 generic.go:334] "Generic (PLEG): container finished" podID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerID="69bb7c1dbb251c3de8ae57f07e730117aca9b6c44df49c90c6093a8ecc70f9e1" exitCode=0 Jan 21 16:14:46 crc kubenswrapper[4902]: I0121 16:14:46.266317 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerDied","Data":"69bb7c1dbb251c3de8ae57f07e730117aca9b6c44df49c90c6093a8ecc70f9e1"} Jan 21 16:14:47 crc kubenswrapper[4902]: I0121 16:14:47.282195 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerStarted","Data":"657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099"} Jan 21 16:14:47 crc kubenswrapper[4902]: 
I0121 16:14:47.312718 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" podStartSLOduration=3.605133227 podStartE2EDuration="44.312694518s" podCreationTimestamp="2026-01-21 16:14:03 +0000 UTC" firstStartedPulling="2026-01-21 16:14:04.698937007 +0000 UTC m=+6006.775770036" lastFinishedPulling="2026-01-21 16:14:45.406498288 +0000 UTC m=+6047.483331327" observedRunningTime="2026-01-21 16:14:47.29392989 +0000 UTC m=+6049.370762919" watchObservedRunningTime="2026-01-21 16:14:47.312694518 +0000 UTC m=+6049.389527547" Jan 21 16:14:54 crc kubenswrapper[4902]: I0121 16:14:54.296018 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:14:54 crc kubenswrapper[4902]: E0121 16:14:54.296812 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.140622 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"] Jan 21 16:15:00 crc kubenswrapper[4902]: E0121 16:15:00.141422 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141435 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api" Jan 21 16:15:00 crc kubenswrapper[4902]: E0121 16:15:00.141464 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="init" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141470 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="init" Jan 21 16:15:00 crc kubenswrapper[4902]: E0121 16:15:00.141487 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api-provider-agent" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141494 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api-provider-agent" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141669 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api-provider-agent" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.141696 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51aa030-a37e-41cd-8552-491e33fe846f" containerName="octavia-api" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.143000 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.145793 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.147481 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.154184 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"] Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.307538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.307734 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.307788 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.415149 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.415342 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.415373 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.626778 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") pod 
\"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.638132 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.638485 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") pod \"collect-profiles-29483535-pvcf2\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:00 crc kubenswrapper[4902]: I0121 16:15:00.927288 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:01 crc kubenswrapper[4902]: I0121 16:15:01.456139 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"] Jan 21 16:15:01 crc kubenswrapper[4902]: W0121 16:15:01.463423 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3234509_8b7b_4b77_9a80_f496d21a727e.slice/crio-e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129 WatchSource:0}: Error finding container e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129: Status 404 returned error can't find the container with id e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129 Jan 21 16:15:02 crc kubenswrapper[4902]: I0121 16:15:02.503608 4902 generic.go:334] "Generic (PLEG): container finished" podID="c3234509-8b7b-4b77-9a80-f496d21a727e" containerID="4e8300ed14fa669d6234d502917b52e699b6641dda6ef60268cdbc2afafd8313" exitCode=0 Jan 21 16:15:02 crc kubenswrapper[4902]: I0121 16:15:02.503681 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" event={"ID":"c3234509-8b7b-4b77-9a80-f496d21a727e","Type":"ContainerDied","Data":"4e8300ed14fa669d6234d502917b52e699b6641dda6ef60268cdbc2afafd8313"} Jan 21 16:15:02 crc kubenswrapper[4902]: I0121 16:15:02.504198 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" event={"ID":"c3234509-8b7b-4b77-9a80-f496d21a727e","Type":"ContainerStarted","Data":"e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129"} Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.912436 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.913333 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") pod \"c3234509-8b7b-4b77-9a80-f496d21a727e\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.913409 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") pod \"c3234509-8b7b-4b77-9a80-f496d21a727e\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.913448 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") pod \"c3234509-8b7b-4b77-9a80-f496d21a727e\" (UID: \"c3234509-8b7b-4b77-9a80-f496d21a727e\") " Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.913899 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c3234509-8b7b-4b77-9a80-f496d21a727e" (UID: "c3234509-8b7b-4b77-9a80-f496d21a727e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.918855 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c3234509-8b7b-4b77-9a80-f496d21a727e" (UID: "c3234509-8b7b-4b77-9a80-f496d21a727e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4902]: I0121 16:15:03.920407 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56" (OuterVolumeSpecName: "kube-api-access-72h56") pod "c3234509-8b7b-4b77-9a80-f496d21a727e" (UID: "c3234509-8b7b-4b77-9a80-f496d21a727e"). InnerVolumeSpecName "kube-api-access-72h56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.019133 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3234509-8b7b-4b77-9a80-f496d21a727e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.019177 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72h56\" (UniqueName: \"kubernetes.io/projected/c3234509-8b7b-4b77-9a80-f496d21a727e-kube-api-access-72h56\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.019190 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3234509-8b7b-4b77-9a80-f496d21a727e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.523815 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" event={"ID":"c3234509-8b7b-4b77-9a80-f496d21a727e","Type":"ContainerDied","Data":"e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129"} Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.524449 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9294568f3d6a90a9c619766f94d50453d679a9353790389fd2fc985e2041129" Jan 21 16:15:04 crc kubenswrapper[4902]: I0121 16:15:04.524059 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2" Jan 21 16:15:05 crc kubenswrapper[4902]: I0121 16:15:05.003240 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 16:15:05 crc kubenswrapper[4902]: I0121 16:15:05.010969 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-b6ktg"] Jan 21 16:15:05 crc kubenswrapper[4902]: I0121 16:15:05.295386 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:15:05 crc kubenswrapper[4902]: E0121 16:15:05.295643 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:15:06 crc kubenswrapper[4902]: I0121 16:15:06.305889 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93c6a82-9651-4ed2-a941-9414d9aff62c" path="/var/lib/kubelet/pods/e93c6a82-9651-4ed2-a941-9414d9aff62c/volumes" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.005991 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-vtnkx"] Jan 21 16:15:07 crc kubenswrapper[4902]: E0121 16:15:07.006562 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3234509-8b7b-4b77-9a80-f496d21a727e" containerName="collect-profiles" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.006585 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3234509-8b7b-4b77-9a80-f496d21a727e" containerName="collect-profiles" Jan 21 16:15:07 crc 
kubenswrapper[4902]: I0121 16:15:07.006851 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3234509-8b7b-4b77-9a80-f496d21a727e" containerName="collect-profiles" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.008307 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.011207 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.012010 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.016176 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.021991 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-vtnkx"] Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151636 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-amphora-certs\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151738 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-scripts\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151761 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151786 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-hm-ports\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151913 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data-merged\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.151946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-combined-ca-bundle\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.253794 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data-merged\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.253872 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-combined-ca-bundle\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.254021 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-amphora-certs\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.254148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-scripts\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.254186 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.254219 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-hm-ports\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.255174 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data-merged\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.256063 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-hm-ports\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.260699 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-combined-ca-bundle\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.260840 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: 
\"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-amphora-certs\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.261313 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-scripts\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.262699 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39-config-data\") pod \"octavia-healthmanager-vtnkx\" (UID: \"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39\") " pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:07 crc kubenswrapper[4902]: I0121 16:15:07.336253 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:08 crc kubenswrapper[4902]: I0121 16:15:08.062751 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-vtnkx"] Jan 21 16:15:08 crc kubenswrapper[4902]: I0121 16:15:08.603444 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-vtnkx" event={"ID":"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39","Type":"ContainerStarted","Data":"c224aba44857e5091c950c0f81b22a1aa27e8329445ae3f4c7ba128689b62bc9"} Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.194413 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-pr9tl"] Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.196555 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.198498 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.200160 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.205250 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-pr9tl"] Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324008 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-amphora-certs\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324065 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-combined-ca-bundle\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324173 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data-merged\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324335 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-scripts\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324430 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-hm-ports\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.324491 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.425959 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-scripts\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.426013 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-hm-ports\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427229 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-hm-ports\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427743 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-amphora-certs\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427779 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-combined-ca-bundle\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.427853 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data-merged\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.428818 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data-merged\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.432212 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-scripts\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.433072 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-config-data\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.433449 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-amphora-certs\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " 
pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.450832 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cb5d58-0b3f-40eb-a5ee-b8ab812c8008-combined-ca-bundle\") pod \"octavia-housekeeping-pr9tl\" (UID: \"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008\") " pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.530471 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:09 crc kubenswrapper[4902]: I0121 16:15:09.635759 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-vtnkx" event={"ID":"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39","Type":"ContainerStarted","Data":"935f678721148c181fa92b0136c39b6faa3d7b6fbdecfa689e4fa25697e2e514"} Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.203022 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-pr9tl"] Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.649320 4902 generic.go:334] "Generic (PLEG): container finished" podID="e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39" containerID="935f678721148c181fa92b0136c39b6faa3d7b6fbdecfa689e4fa25697e2e514" exitCode=0 Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.649416 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-vtnkx" event={"ID":"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39","Type":"ContainerDied","Data":"935f678721148c181fa92b0136c39b6faa3d7b6fbdecfa689e4fa25697e2e514"} Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.654859 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pr9tl" event={"ID":"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008","Type":"ContainerStarted","Data":"334f3c92e4efbdc9609c4ad989fc4a2ffdc1a2767b989a8f926ee900339fdc9d"} Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.912375 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.912635 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="octavia-amphora-httpd" containerID="cri-o://657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099" gracePeriod=30 Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.926677 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-drv9p"] Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.929727 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-drv9p" Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.931805 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.932183 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 21 16:15:10 crc kubenswrapper[4902]: I0121 16:15:10.948820 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-drv9p"] Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.066667 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/646b20f3-5a05-4352-9645-69bed7f67dae-config-data-merged\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067381 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-scripts\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067460 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-config-data\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067498 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-combined-ca-bundle\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067578 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-amphora-certs\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.067722 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/646b20f3-5a05-4352-9645-69bed7f67dae-hm-ports\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169072 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/646b20f3-5a05-4352-9645-69bed7f67dae-hm-ports\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169204 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/646b20f3-5a05-4352-9645-69bed7f67dae-config-data-merged\") pod \"octavia-worker-drv9p\" (UID: 
\"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169236 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-scripts\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169278 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-config-data\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169303 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-combined-ca-bundle\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169331 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-amphora-certs\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.169702 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/646b20f3-5a05-4352-9645-69bed7f67dae-config-data-merged\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.171359 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/646b20f3-5a05-4352-9645-69bed7f67dae-hm-ports\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.174406 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-amphora-certs\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.175364 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-scripts\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.177438 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-combined-ca-bundle\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.178198 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/646b20f3-5a05-4352-9645-69bed7f67dae-config-data\") pod \"octavia-worker-drv9p\" (UID: \"646b20f3-5a05-4352-9645-69bed7f67dae\") " pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.257199 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-drv9p" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.664027 4902 generic.go:334] "Generic (PLEG): container finished" podID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerID="657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099" exitCode=0 Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.664281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerDied","Data":"657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099"} Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.666181 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-vtnkx" event={"ID":"e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39","Type":"ContainerStarted","Data":"a4f4237321451e31ad7db921d31a338e857137025761b91a07f74b4fefb71d19"} Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.666357 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.686875 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-vtnkx" podStartSLOduration=5.686849886 podStartE2EDuration="5.686849886s" podCreationTimestamp="2026-01-21 16:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:15:11.680443116 +0000 UTC m=+6073.757276155" watchObservedRunningTime="2026-01-21 16:15:11.686849886 +0000 UTC m=+6073.763682915" Jan 21 16:15:11 crc kubenswrapper[4902]: I0121 16:15:11.954400 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.090857 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") pod \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.091102 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") pod \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\" (UID: \"e68cbe1e-2ace-4011-856c-5fa393f45b4b\") " Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.130524 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e68cbe1e-2ace-4011-856c-5fa393f45b4b" (UID: "e68cbe1e-2ace-4011-856c-5fa393f45b4b"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.191331 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "e68cbe1e-2ace-4011-856c-5fa393f45b4b" (UID: "e68cbe1e-2ace-4011-856c-5fa393f45b4b"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.194183 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e68cbe1e-2ace-4011-856c-5fa393f45b4b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.194215 4902 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/e68cbe1e-2ace-4011-856c-5fa393f45b4b-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.354156 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-drv9p"] Jan 21 16:15:12 crc kubenswrapper[4902]: E0121 16:15:12.363519 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68cbe1e_2ace_4011_856c_5fa393f45b4b.slice/crio-b6f108856fdfaa3ddd29d66fca750e5174c3b7beab2b9dd46ad0290fb86745d4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68cbe1e_2ace_4011_856c_5fa393f45b4b.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.676404 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.676396 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-zmb5b" event={"ID":"e68cbe1e-2ace-4011-856c-5fa393f45b4b","Type":"ContainerDied","Data":"b6f108856fdfaa3ddd29d66fca750e5174c3b7beab2b9dd46ad0290fb86745d4"} Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.676796 4902 scope.go:117] "RemoveContainer" containerID="657dc03492b59a1f33c2bcfda57969b4fd0b8668563d2f702e94ee1a2f5b2099" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.680307 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pr9tl" event={"ID":"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008","Type":"ContainerStarted","Data":"dd237234903b1e90630066f8f3b5624819590b40362e32208301679b925d26d9"} Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.686776 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-drv9p" event={"ID":"646b20f3-5a05-4352-9645-69bed7f67dae","Type":"ContainerStarted","Data":"88e41ff5dbb6abfe33e302ae3175cef222d065b428b1717c6bc5037869bd2e1b"} Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.703035 4902 scope.go:117] "RemoveContainer" containerID="69bb7c1dbb251c3de8ae57f07e730117aca9b6c44df49c90c6093a8ecc70f9e1" Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.708430 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:15:12 crc kubenswrapper[4902]: I0121 16:15:12.720105 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-zmb5b"] Jan 21 16:15:13 crc kubenswrapper[4902]: I0121 16:15:13.698285 4902 generic.go:334] "Generic (PLEG): container finished" podID="34cb5d58-0b3f-40eb-a5ee-b8ab812c8008" containerID="dd237234903b1e90630066f8f3b5624819590b40362e32208301679b925d26d9" exitCode=0 Jan 21 16:15:13 crc kubenswrapper[4902]: I0121 16:15:13.698353 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pr9tl" event={"ID":"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008","Type":"ContainerDied","Data":"dd237234903b1e90630066f8f3b5624819590b40362e32208301679b925d26d9"} Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.306818 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" path="/var/lib/kubelet/pods/e68cbe1e-2ace-4011-856c-5fa393f45b4b/volumes" Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.722061 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-pr9tl" event={"ID":"34cb5d58-0b3f-40eb-a5ee-b8ab812c8008","Type":"ContainerStarted","Data":"94dc0ddc8010dc24e008edc4ffa025daf08a6bd4d7dead87a3434da79e0eab7d"} Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.722164 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.724598 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-drv9p" event={"ID":"646b20f3-5a05-4352-9645-69bed7f67dae","Type":"ContainerStarted","Data":"ad2399bd80934ddbdfd2e4ae6471b91d072bef38d16c598a25fd79a08cd3c479"} Jan 21 16:15:14 crc kubenswrapper[4902]: I0121 16:15:14.740917 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-pr9tl" podStartSLOduration=4.08707319 
podStartE2EDuration="5.740894678s" podCreationTimestamp="2026-01-21 16:15:09 +0000 UTC" firstStartedPulling="2026-01-21 16:15:10.209502734 +0000 UTC m=+6072.286335763" lastFinishedPulling="2026-01-21 16:15:11.863324222 +0000 UTC m=+6073.940157251" observedRunningTime="2026-01-21 16:15:14.740030433 +0000 UTC m=+6076.816863482" watchObservedRunningTime="2026-01-21 16:15:14.740894678 +0000 UTC m=+6076.817727707" Jan 21 16:15:15 crc kubenswrapper[4902]: I0121 16:15:15.738022 4902 generic.go:334] "Generic (PLEG): container finished" podID="646b20f3-5a05-4352-9645-69bed7f67dae" containerID="ad2399bd80934ddbdfd2e4ae6471b91d072bef38d16c598a25fd79a08cd3c479" exitCode=0 Jan 21 16:15:15 crc kubenswrapper[4902]: I0121 16:15:15.738173 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-drv9p" event={"ID":"646b20f3-5a05-4352-9645-69bed7f67dae","Type":"ContainerDied","Data":"ad2399bd80934ddbdfd2e4ae6471b91d072bef38d16c598a25fd79a08cd3c479"} Jan 21 16:15:16 crc kubenswrapper[4902]: I0121 16:15:16.749722 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-drv9p" event={"ID":"646b20f3-5a05-4352-9645-69bed7f67dae","Type":"ContainerStarted","Data":"2939b4e11c7b2d6987e7ace53097d49dab09fbb5238c00f5000c6f1e44443cde"} Jan 21 16:15:16 crc kubenswrapper[4902]: I0121 16:15:16.751240 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-drv9p" Jan 21 16:15:16 crc kubenswrapper[4902]: I0121 16:15:16.773946 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-drv9p" podStartSLOduration=5.422267501 podStartE2EDuration="6.773920758s" podCreationTimestamp="2026-01-21 16:15:10 +0000 UTC" firstStartedPulling="2026-01-21 16:15:12.358910978 +0000 UTC m=+6074.435744017" lastFinishedPulling="2026-01-21 16:15:13.710564245 +0000 UTC m=+6075.787397274" observedRunningTime="2026-01-21 16:15:16.770905144 +0000 UTC m=+6078.847738203" watchObservedRunningTime="2026-01-21 16:15:16.773920758 +0000 UTC m=+6078.850753807" Jan 21 16:15:19 crc kubenswrapper[4902]: I0121 16:15:19.295279 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:15:19 crc kubenswrapper[4902]: E0121 16:15:19.295808 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:15:22 crc kubenswrapper[4902]: I0121 16:15:22.378490 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-vtnkx" Jan 21 16:15:24 crc kubenswrapper[4902]: I0121 16:15:24.572281 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-pr9tl" Jan 21 16:15:26 crc kubenswrapper[4902]: I0121 16:15:26.111017 4902 scope.go:117] "RemoveContainer" containerID="2d74f71a998726973b118e0b0755aa5903f2b68cb19dc4c893a565df10186a56" Jan 21 16:15:26 crc kubenswrapper[4902]: I0121 16:15:26.286259 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-drv9p" Jan 21 16:15:31 crc kubenswrapper[4902]: I0121 16:15:31.295237 4902 scope.go:117] "RemoveContainer" 
containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:15:31 crc kubenswrapper[4902]: E0121 16:15:31.296231 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.803093 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:42 crc kubenswrapper[4902]: E0121 16:15:42.803893 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="octavia-amphora-httpd" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.803904 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="octavia-amphora-httpd" Jan 21 16:15:42 crc kubenswrapper[4902]: E0121 16:15:42.803913 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="init" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.803918 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="init" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.804103 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68cbe1e-2ace-4011-856c-5fa393f45b4b" containerName="octavia-amphora-httpd" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.805455 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.815643 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.892141 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.892357 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.892864 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.996008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.996090 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.996164 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.997275 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:42 crc kubenswrapper[4902]: I0121 16:15:42.998901 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:43 crc kubenswrapper[4902]: I0121 16:15:43.022622 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") pod \"community-operators-s5tqc\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:43 crc kubenswrapper[4902]: I0121 16:15:43.130128 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:43 crc kubenswrapper[4902]: I0121 16:15:43.594927 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:44 crc kubenswrapper[4902]: I0121 16:15:44.024159 4902 generic.go:334] "Generic (PLEG): container finished" podID="a2335693-82d8-44d9-93cb-8845da187fc4" containerID="0f5fdb1f77ee5e53923e8edceba05628177b711a2533fe02fb33769c82576bcf" exitCode=0 Jan 21 16:15:44 crc kubenswrapper[4902]: I0121 16:15:44.024204 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerDied","Data":"0f5fdb1f77ee5e53923e8edceba05628177b711a2533fe02fb33769c82576bcf"} Jan 21 16:15:44 crc kubenswrapper[4902]: I0121 16:15:44.024231 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerStarted","Data":"0f43d77e4d6e94ad35f67feaf5658b556c28c0b92a382bc5e8acbbd44673a54c"} Jan 21 16:15:45 crc kubenswrapper[4902]: I0121 16:15:45.035410 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerStarted","Data":"d6b0eff18372cd37ddfe92c986fb4a923d9dbd3f107869f09f0fdbe8e2eaaa5c"} Jan 21 16:15:46 crc kubenswrapper[4902]: I0121 16:15:46.056289 4902 generic.go:334] "Generic (PLEG): container finished" podID="a2335693-82d8-44d9-93cb-8845da187fc4" containerID="d6b0eff18372cd37ddfe92c986fb4a923d9dbd3f107869f09f0fdbe8e2eaaa5c" exitCode=0 Jan 21 16:15:46 crc kubenswrapper[4902]: I0121 16:15:46.056588 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerDied","Data":"d6b0eff18372cd37ddfe92c986fb4a923d9dbd3f107869f09f0fdbe8e2eaaa5c"} Jan 21 16:15:46 crc kubenswrapper[4902]: I0121 16:15:46.300661 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:15:46 crc kubenswrapper[4902]: E0121 16:15:46.302147 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:15:47 crc kubenswrapper[4902]: I0121 16:15:47.069737 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerStarted","Data":"4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb"} Jan 21 16:15:47 crc kubenswrapper[4902]: I0121 16:15:47.099634 4902 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s5tqc" podStartSLOduration=2.6188767139999998 podStartE2EDuration="5.099613863s" podCreationTimestamp="2026-01-21 16:15:42 +0000 UTC" firstStartedPulling="2026-01-21 16:15:44.026625588 +0000 UTC m=+6106.103458617" lastFinishedPulling="2026-01-21 16:15:46.507362747 +0000 UTC m=+6108.584195766" observedRunningTime="2026-01-21 16:15:47.092485203 +0000 UTC m=+6109.169318232" watchObservedRunningTime="2026-01-21 16:15:47.099613863 +0000 UTC m=+6109.176446902" Jan 21 16:15:53 crc kubenswrapper[4902]: I0121 16:15:53.130622 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:53 crc kubenswrapper[4902]: I0121 16:15:53.131181 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:53 crc kubenswrapper[4902]: I0121 16:15:53.189580 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:54 crc kubenswrapper[4902]: I0121 16:15:54.179481 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:54 crc kubenswrapper[4902]: I0121 16:15:54.238031 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:56 crc kubenswrapper[4902]: I0121 16:15:56.187875 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s5tqc" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="registry-server" containerID="cri-o://4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb" gracePeriod=2 Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.198026 4902 generic.go:334] "Generic (PLEG): container finished" podID="a2335693-82d8-44d9-93cb-8845da187fc4" containerID="4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb" exitCode=0 Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.198127 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerDied","Data":"4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb"} Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.198342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s5tqc" event={"ID":"a2335693-82d8-44d9-93cb-8845da187fc4","Type":"ContainerDied","Data":"0f43d77e4d6e94ad35f67feaf5658b556c28c0b92a382bc5e8acbbd44673a54c"} Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.198360 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f43d77e4d6e94ad35f67feaf5658b556c28c0b92a382bc5e8acbbd44673a54c" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.216476 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.318346 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") pod \"a2335693-82d8-44d9-93cb-8845da187fc4\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.318400 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") pod \"a2335693-82d8-44d9-93cb-8845da187fc4\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.318663 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") pod \"a2335693-82d8-44d9-93cb-8845da187fc4\" (UID: \"a2335693-82d8-44d9-93cb-8845da187fc4\") " Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.319376 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities" (OuterVolumeSpecName: "utilities") pod "a2335693-82d8-44d9-93cb-8845da187fc4" (UID: "a2335693-82d8-44d9-93cb-8845da187fc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.324874 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf" (OuterVolumeSpecName: "kube-api-access-rthxf") pod "a2335693-82d8-44d9-93cb-8845da187fc4" (UID: "a2335693-82d8-44d9-93cb-8845da187fc4"). InnerVolumeSpecName "kube-api-access-rthxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.366000 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2335693-82d8-44d9-93cb-8845da187fc4" (UID: "a2335693-82d8-44d9-93cb-8845da187fc4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.421558 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rthxf\" (UniqueName: \"kubernetes.io/projected/a2335693-82d8-44d9-93cb-8845da187fc4-kube-api-access-rthxf\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.421603 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:57 crc kubenswrapper[4902]: I0121 16:15:57.421616 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2335693-82d8-44d9-93cb-8845da187fc4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:58 crc kubenswrapper[4902]: I0121 16:15:58.208956 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s5tqc" Jan 21 16:15:58 crc kubenswrapper[4902]: I0121 16:15:58.264232 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:58 crc kubenswrapper[4902]: I0121 16:15:58.286197 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s5tqc"] Jan 21 16:15:58 crc kubenswrapper[4902]: I0121 16:15:58.306507 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" path="/var/lib/kubelet/pods/a2335693-82d8-44d9-93cb-8845da187fc4/volumes" Jan 21 16:16:01 crc kubenswrapper[4902]: I0121 16:16:01.295854 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:01 crc kubenswrapper[4902]: E0121 16:16:01.296953 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.153085 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:14 crc kubenswrapper[4902]: E0121 16:16:14.156259 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="registry-server" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.156286 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="registry-server" Jan 21 16:16:14 crc kubenswrapper[4902]: E0121 16:16:14.156313 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="extract-content" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.156321 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="extract-content" Jan 21 16:16:14 crc kubenswrapper[4902]: E0121 16:16:14.156348 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="extract-utilities" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.156361 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="extract-utilities" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.156629 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2335693-82d8-44d9-93cb-8845da187fc4" containerName="registry-server" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.157988 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.161060 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-vm6cj" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.161077 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.161583 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.162524 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.182338 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.214955 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.215448 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-log" containerID="cri-o://d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3" gracePeriod=30 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.215619 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-httpd" containerID="cri-o://736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3" gracePeriod=30 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.267625 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.267881 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.267920 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.268164 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.268231 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.280303 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.280597 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-log" containerID="cri-o://b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c" gracePeriod=30 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.281331 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-httpd" containerID="cri-o://59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207" gracePeriod=30 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.295288 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:14 crc kubenswrapper[4902]: E0121 16:16:14.295600 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.310431 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.312357 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.321441 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370415 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370638 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370702 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370783 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370883 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbxg\" (UniqueName: \"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370909 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.370959 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.371014 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.371616 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.371914 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.372295 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.377541 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.386447 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") pod \"horizon-5998889f69-hx8w9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.403277 4902 generic.go:334] "Generic (PLEG): container finished" podID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerID="d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3" exitCode=143 Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.403332 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerDied","Data":"d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3"} Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472311 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472381 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbxg\" (UniqueName: 
\"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472400 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472523 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.472896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.473161 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.474221 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.476690 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.485175 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.492187 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbxg\" (UniqueName: \"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") pod \"horizon-6cdc5859df-vpr9s\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.778973 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:14 crc kubenswrapper[4902]: I0121 16:16:14.963003 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.229554 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.417849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerStarted","Data":"6e949b6543efff5451a444f3bd5efb9a9f0312a98d89cdd384669d329bffb82f"} Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.422216 4902 generic.go:334] "Generic (PLEG): container finished" podID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerID="b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c" exitCode=143 Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.422304 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerDied","Data":"b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c"} Jan 21 16:16:15 crc kubenswrapper[4902]: I0121 16:16:15.423853 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerStarted","Data":"4b82e831be0b81da6f10c4a3ff492b70b47b678bffe68000c6d720bf8f6f3d32"} Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.491917 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.523461 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.526303 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.530592 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.541943 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.588754 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.620723 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.624430 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.653228 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.655947 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656405 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656606 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656645 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.656873 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.659928 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759259 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759336 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759376 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759437 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759524 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759549 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759594 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759704 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbc7c\" (UniqueName: 
\"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759769 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759801 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.759851 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.760484 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.763680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.763934 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.765774 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.766411 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.767819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") pod \"horizon-7dd785d478-plbs7\" (UID: 
\"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.782803 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") pod \"horizon-7dd785d478-plbs7\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.859475 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.863877 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.863940 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864015 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbc7c\" (UniqueName: \"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864133 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864203 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864236 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.864740 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") pod 
\"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.866448 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.876829 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.876838 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.877352 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.878508 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.885881 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbc7c\" (UniqueName: \"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") pod \"horizon-786f96566b-w596t\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:16 crc kubenswrapper[4902]: I0121 16:16:16.948410 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:17 crc kubenswrapper[4902]: I0121 16:16:17.340435 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:16:17 crc kubenswrapper[4902]: W0121 16:16:17.343392 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b8ff7ce_f44c_45d2_ac7c_ddebb604798c.slice/crio-4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581 WatchSource:0}: Error finding container 4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581: Status 404 returned error can't find the container with id 4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581 Jan 21 16:16:17 crc kubenswrapper[4902]: I0121 16:16:17.436105 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:16:17 crc kubenswrapper[4902]: I0121 16:16:17.455071 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerStarted","Data":"4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581"} Jan 21 16:16:17 crc kubenswrapper[4902]: W0121 16:16:17.457605 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb772cd9d_83ce_4675_84de_09f40bdcabe3.slice/crio-3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd WatchSource:0}: Error finding container 3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd: Status 404 returned error can't find the container with id 3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.474729 4902 generic.go:334] "Generic (PLEG): container finished" podID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerID="59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207" exitCode=0 Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.475186 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerDied","Data":"59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207"} Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.481837 4902 generic.go:334] "Generic (PLEG): container finished" podID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerID="736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3" exitCode=0 Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.481892 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerDied","Data":"736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3"} Jan 21 16:16:18 crc kubenswrapper[4902]: I0121 16:16:18.483422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerStarted","Data":"3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd"} Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.057575 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"] Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.070311 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-create-85k9w"] Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.080409 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0136-account-create-update-k4cmq"] Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.088605 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-85k9w"] Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.307822 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c6f518-fd8b-4c60-9f36-1eb57bd30b06" path="/var/lib/kubelet/pods/e8c6f518-fd8b-4c60-9f36-1eb57bd30b06/volumes" Jan 21 16:16:20 crc kubenswrapper[4902]: I0121 16:16:20.308864 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8" path="/var/lib/kubelet/pods/edc6e2c0-6737-49e0-b5d8-f77a5de0a7f8/volumes" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.500365 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.508236 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.583434 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a90211b-865e-43ee-a4d2-4435d5377cac","Type":"ContainerDied","Data":"9465028a66213606555e0f8ddd61e53e1a204236d21e0dbf53c9bae174755deb"} Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.583499 4902 scope.go:117] "RemoveContainer" containerID="59aec4d7b002f6bac7cebbdd58347eb07bbd6d976ee19de283329b9b2320f207" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.583677 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.611162 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"621700c2-adff-4cf1-81a4-fb0213e5e919","Type":"ContainerDied","Data":"fb6c969ddf6477f474e95f9c5c6fde9452e3279bb465fbf4b3d1c7ae5b80a349"} Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.611273 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695117 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695284 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695355 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695508 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695573 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695600 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695643 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695673 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695744 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695767 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 
21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695809 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695841 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695921 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") pod \"621700c2-adff-4cf1-81a4-fb0213e5e919\" (UID: \"621700c2-adff-4cf1-81a4-fb0213e5e919\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.695992 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") pod \"8a90211b-865e-43ee-a4d2-4435d5377cac\" (UID: \"8a90211b-865e-43ee-a4d2-4435d5377cac\") " Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.696799 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs" (OuterVolumeSpecName: "logs") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.696821 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs" (OuterVolumeSpecName: "logs") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.696999 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.697178 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.704887 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx" (OuterVolumeSpecName: "kube-api-access-kkkjx") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "kube-api-access-kkkjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.705079 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts" (OuterVolumeSpecName: "scripts") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.711654 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts" (OuterVolumeSpecName: "scripts") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.716175 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg" (OuterVolumeSpecName: "kube-api-access-jm8hg") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "kube-api-access-jm8hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.735949 4902 scope.go:117] "RemoveContainer" containerID="b5f9108bd4e377347ea43cf1022065cb061fb7505fcb4f124adde97f4fd9fe0c" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.746459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.768308 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.783102 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data" (OuterVolumeSpecName: "config-data") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.792726 4902 scope.go:117] "RemoveContainer" containerID="736f3facc63619fff931156c32623cacaeb743514ad4d9bc998e592c1498cea3" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798573 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798601 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798612 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798623 4902 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/621700c2-adff-4cf1-81a4-fb0213e5e919-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798633 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a90211b-865e-43ee-a4d2-4435d5377cac-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798644 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkkjx\" (UniqueName: \"kubernetes.io/projected/621700c2-adff-4cf1-81a4-fb0213e5e919-kube-api-access-kkkjx\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798654 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798666 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798676 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm8hg\" (UniqueName: \"kubernetes.io/projected/8a90211b-865e-43ee-a4d2-4435d5377cac-kube-api-access-jm8hg\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798687 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.798699 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.805973 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data" (OuterVolumeSpecName: "config-data") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.806257 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8a90211b-865e-43ee-a4d2-4435d5377cac" (UID: "8a90211b-865e-43ee-a4d2-4435d5377cac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.828176 4902 scope.go:117] "RemoveContainer" containerID="d703f5632f2cbf952b8d8487e251807ade66f1d024b3d48fde5f54990b973dc3" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.865019 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "621700c2-adff-4cf1-81a4-fb0213e5e919" (UID: "621700c2-adff-4cf1-81a4-fb0213e5e919"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.901547 4902 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.901596 4902 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a90211b-865e-43ee-a4d2-4435d5377cac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.901610 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621700c2-adff-4cf1-81a4-fb0213e5e919-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.962451 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:22 crc kubenswrapper[4902]: I0121 16:16:22.985180 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.008225 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018184 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: E0121 16:16:23.018701 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018717 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: E0121 16:16:23.018734 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018740 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: E0121 16:16:23.018749 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" 
containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018756 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: E0121 16:16:23.018763 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018770 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018942 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018960 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018970 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-log" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.018981 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" containerName="glance-httpd" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.020028 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.023185 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.023403 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.023556 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.024064 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mn7jp" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.034671 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.043304 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.056673 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.058629 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.062674 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.062860 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.081115 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215303 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqj6v\" (UniqueName: \"kubernetes.io/projected/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-kube-api-access-tqj6v\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215367 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215446 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215477 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjw5\" (UniqueName: \"kubernetes.io/projected/43059835-649d-40c9-bf13-f46c9d6b65a6-kube-api-access-qcjw5\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215545 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-logs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215583 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215669 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215719 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215767 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-logs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215814 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215859 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215891 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.215928 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318167 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318248 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqj6v\" (UniqueName: \"kubernetes.io/projected/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-kube-api-access-tqj6v\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 
16:16:23.318288 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318322 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318357 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318384 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcjw5\" (UniqueName: \"kubernetes.io/projected/43059835-649d-40c9-bf13-f46c9d6b65a6-kube-api-access-qcjw5\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318428 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-logs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318461 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318532 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318580 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318619 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-logs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318654 4902 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318698 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.318727 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.319190 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.319261 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43059835-649d-40c9-bf13-f46c9d6b65a6-logs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.319511 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-logs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.319858 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.326761 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.327317 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.328558 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.329567 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.337103 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.338483 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.338783 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43059835-649d-40c9-bf13-f46c9d6b65a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.339066 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.340988 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcjw5\" (UniqueName: \"kubernetes.io/projected/43059835-649d-40c9-bf13-f46c9d6b65a6-kube-api-access-qcjw5\") pod \"glance-default-external-api-0\" (UID: \"43059835-649d-40c9-bf13-f46c9d6b65a6\") " pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.341747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqj6v\" (UniqueName: \"kubernetes.io/projected/a21c1b8f-59f7-445b-bc8a-f8e89d7142e5-kube-api-access-tqj6v\") pod \"glance-default-internal-api-0\" (UID: \"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.356902 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.431209 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.637963 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerStarted","Data":"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.638342 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerStarted","Data":"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.645863 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerStarted","Data":"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.645902 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerStarted","Data":"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.646006 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cdc5859df-vpr9s" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon-log" containerID="cri-o://65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" gracePeriod=30 Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.646099 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cdc5859df-vpr9s" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon" containerID="cri-o://7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" gracePeriod=30 Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.654086 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerStarted","Data":"b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.654121 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerStarted","Data":"dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.665460 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7dd785d478-plbs7" podStartSLOduration=2.553676663 podStartE2EDuration="7.665442399s" podCreationTimestamp="2026-01-21 16:16:16 +0000 UTC" firstStartedPulling="2026-01-21 16:16:17.346012919 +0000 UTC m=+6139.422845958" lastFinishedPulling="2026-01-21 16:16:22.457778665 +0000 UTC m=+6144.534611694" observedRunningTime="2026-01-21 16:16:23.661758046 +0000 UTC m=+6145.738591075" watchObservedRunningTime="2026-01-21 16:16:23.665442399 +0000 UTC m=+6145.742275428" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.669997 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" 
event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerStarted","Data":"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.670081 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerStarted","Data":"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890"} Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.670201 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5998889f69-hx8w9" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon-log" containerID="cri-o://60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" gracePeriod=30 Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.670248 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5998889f69-hx8w9" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon" containerID="cri-o://ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" gracePeriod=30 Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.697235 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-786f96566b-w596t" podStartSLOduration=2.645509457 podStartE2EDuration="7.697210323s" podCreationTimestamp="2026-01-21 16:16:16 +0000 UTC" firstStartedPulling="2026-01-21 16:16:17.462302071 +0000 UTC m=+6139.539135100" lastFinishedPulling="2026-01-21 16:16:22.514002937 +0000 UTC m=+6144.590835966" observedRunningTime="2026-01-21 16:16:23.6786251 +0000 UTC m=+6145.755458129" watchObservedRunningTime="2026-01-21 16:16:23.697210323 +0000 UTC m=+6145.774043352" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.737168 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5998889f69-hx8w9" podStartSLOduration=2.248045731 podStartE2EDuration="9.737148297s" podCreationTimestamp="2026-01-21 16:16:14 +0000 UTC" firstStartedPulling="2026-01-21 16:16:14.968896085 +0000 UTC m=+6137.045729114" lastFinishedPulling="2026-01-21 16:16:22.457998651 +0000 UTC m=+6144.534831680" observedRunningTime="2026-01-21 16:16:23.734449931 +0000 UTC m=+6145.811282960" watchObservedRunningTime="2026-01-21 16:16:23.737148297 +0000 UTC m=+6145.813981326" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.744497 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cdc5859df-vpr9s" podStartSLOduration=2.4556462740000002 podStartE2EDuration="9.744477014s" podCreationTimestamp="2026-01-21 16:16:14 +0000 UTC" firstStartedPulling="2026-01-21 16:16:15.226269698 +0000 UTC m=+6137.303102747" lastFinishedPulling="2026-01-21 16:16:22.515100458 +0000 UTC m=+6144.591933487" observedRunningTime="2026-01-21 16:16:23.71521062 +0000 UTC m=+6145.792043649" watchObservedRunningTime="2026-01-21 16:16:23.744477014 +0000 UTC m=+6145.821310043" Jan 21 16:16:23 crc kubenswrapper[4902]: I0121 16:16:23.943260 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.053618 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.464865 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621700c2-adff-4cf1-81a4-fb0213e5e919" 
path="/var/lib/kubelet/pods/621700c2-adff-4cf1-81a4-fb0213e5e919/volumes" Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.466680 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a90211b-865e-43ee-a4d2-4435d5377cac" path="/var/lib/kubelet/pods/8a90211b-865e-43ee-a4d2-4435d5377cac/volumes" Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.486054 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.687962 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5","Type":"ContainerStarted","Data":"9cfdffd840702e0e6c02aa2ed7cbe476730e81f72919f796c530733b81c2799e"} Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.689584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43059835-649d-40c9-bf13-f46c9d6b65a6","Type":"ContainerStarted","Data":"1e691a386ce0ecf5954a8e4ca1743adc7e19bac2c3d561d3031c3780f28a4e42"} Jan 21 16:16:24 crc kubenswrapper[4902]: I0121 16:16:24.780100 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:25 crc kubenswrapper[4902]: I0121 16:16:25.700573 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43059835-649d-40c9-bf13-f46c9d6b65a6","Type":"ContainerStarted","Data":"920df749f27a24b9a35bb974f78f3bdaa0871ed3eb4daa706c1cfd5b95ffdd08"} Jan 21 16:16:25 crc kubenswrapper[4902]: I0121 16:16:25.708726 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5","Type":"ContainerStarted","Data":"763c1eb0c0214c1476a67572a18370dd785222d6aa922d4881d3926263c40c17"} Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.198139 4902 scope.go:117] "RemoveContainer" containerID="e7ae920f7061533fd1ae5c5eabfd18124e9c27f0aad7594a5b9ba20211753b38" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.240147 4902 scope.go:117] "RemoveContainer" containerID="c71cec8eacda47056c7a215f2b04bc9d493e2cbfdf871841495ef07bfb7eb7a5" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.294960 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:26 crc kubenswrapper[4902]: E0121 16:16:26.295309 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.722937 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43059835-649d-40c9-bf13-f46c9d6b65a6","Type":"ContainerStarted","Data":"f841d7ec3a945d5629fca6062dbd1bdbf1b8d411ab3b317f83b99177f1306350"} Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.725647 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a21c1b8f-59f7-445b-bc8a-f8e89d7142e5","Type":"ContainerStarted","Data":"3c71f8800c06357ab47db802223123c8f2e58c9e194b664e56104f1339bdbacf"} Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.762320 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.762295666 podStartE2EDuration="4.762295666s" podCreationTimestamp="2026-01-21 16:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:16:26.743670402 +0000 UTC m=+6148.820503451" watchObservedRunningTime="2026-01-21 16:16:26.762295666 +0000 UTC m=+6148.839128695" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.778721 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.778697237 podStartE2EDuration="4.778697237s" podCreationTimestamp="2026-01-21 16:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:16:26.768441979 +0000 UTC m=+6148.845275008" watchObservedRunningTime="2026-01-21 16:16:26.778697237 +0000 UTC m=+6148.855530266" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.860935 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.861000 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.949229 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:26 crc kubenswrapper[4902]: I0121 16:16:26.949294 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:27 crc kubenswrapper[4902]: I0121 16:16:27.060157 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7k4p6"] Jan 21 16:16:27 crc kubenswrapper[4902]: I0121 16:16:27.072291 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7k4p6"] Jan 21 16:16:28 crc kubenswrapper[4902]: I0121 16:16:28.307569 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2" path="/var/lib/kubelet/pods/58a9fed4-e340-4ac7-a3a6-750ce7aa3ad2/volumes" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.358677 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.359464 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.402618 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.421879 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.437405 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.438358 4902 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.492312 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:16:33 crc kubenswrapper[4902]: I0121 16:16:33.506643 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:16:34 crc kubenswrapper[4902]: I0121 16:16:34.164773 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:34 crc kubenswrapper[4902]: I0121 16:16:34.165110 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:34 crc kubenswrapper[4902]: I0121 16:16:34.171097 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:16:34 crc kubenswrapper[4902]: I0121 16:16:34.171537 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.181131 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.181373 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.359930 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.372558 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.372664 4902 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.417392 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.468071 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.861823 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 21 16:16:36 crc kubenswrapper[4902]: I0121 16:16:36.950957 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.111:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8443: connect: connection refused" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.384863 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.416209 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.416365 4902 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.515327 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.515839 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.516060 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.577782 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.583156 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.592488 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620233 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620328 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620391 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620435 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc 
kubenswrapper[4902]: I0121 16:16:37.620494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.620658 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.621235 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.621732 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.659781 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") pod \"redhat-marketplace-rgzx7\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.723232 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.723687 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.723854 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.723888 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.724350 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.737766 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.748563 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") pod \"redhat-operators-qqcwn\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:37 crc kubenswrapper[4902]: I0121 16:16:37.908505 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:38 crc kubenswrapper[4902]: I0121 16:16:38.249635 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:38 crc kubenswrapper[4902]: I0121 16:16:38.465225 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.245535 4902 generic.go:334] "Generic (PLEG): container finished" podID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerID="a5edfafdeacf21f426cc5bd6281b1cd868d12717fac023895ab55ea3fbcafc1e" exitCode=0 Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.245649 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerDied","Data":"a5edfafdeacf21f426cc5bd6281b1cd868d12717fac023895ab55ea3fbcafc1e"} Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.245851 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerStarted","Data":"7431d53a67478bc2afebf8017741886b32171eb148f4ba07b41d4ca2523dd676"} Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.247845 4902 generic.go:334] "Generic (PLEG): container finished" podID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerID="3978acdb017791c813d4f5337aced828704d5e523e45d415b1601e3ec73ed790" exitCode=0 Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.247890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerDied","Data":"3978acdb017791c813d4f5337aced828704d5e523e45d415b1601e3ec73ed790"} Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.247917 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerStarted","Data":"dee8922d5520e1f0c611f74ca26c6dc79b0da6d5a6e133ff12dfae78dbe2c30a"} Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.296071 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:39 crc kubenswrapper[4902]: E0121 16:16:39.296397 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.778372 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.780495 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.788258 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.788571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.788620 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.795459 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.890475 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.890528 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.890555 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.891169 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.891191 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:39 crc kubenswrapper[4902]: I0121 16:16:39.913598 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") pod \"certified-operators-v5hnt\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:40 crc kubenswrapper[4902]: I0121 16:16:40.106571 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:40 crc kubenswrapper[4902]: I0121 16:16:40.750386 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:16:40 crc kubenswrapper[4902]: W0121 16:16:40.764019 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice/crio-ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9 WatchSource:0}: Error finding container ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9: Status 404 returned error can't find the container with id ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9 Jan 21 16:16:41 crc kubenswrapper[4902]: I0121 16:16:41.270253 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerStarted","Data":"ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9"} Jan 21 16:16:41 crc kubenswrapper[4902]: I0121 16:16:41.272284 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerStarted","Data":"7bdedfb5108f3ffecf10a0859392a7cf8d5159f213fdc4909c0c06024f91b0c1"} Jan 21 16:16:41 crc kubenswrapper[4902]: I0121 16:16:41.274464 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerStarted","Data":"0b15326eb4c064e7851c30f18a27df97dd90b66b41ed4359185f31df9de1590c"} Jan 21 16:16:42 crc kubenswrapper[4902]: I0121 16:16:42.289681 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerStarted","Data":"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97"} Jan 21 16:16:42 crc kubenswrapper[4902]: I0121 16:16:42.304822 4902 generic.go:334] "Generic (PLEG): container finished" podID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerID="7bdedfb5108f3ffecf10a0859392a7cf8d5159f213fdc4909c0c06024f91b0c1" exitCode=0 Jan 21 16:16:42 crc kubenswrapper[4902]: I0121 16:16:42.320929 4902 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerDied","Data":"7bdedfb5108f3ffecf10a0859392a7cf8d5159f213fdc4909c0c06024f91b0c1"} Jan 21 16:16:43 crc kubenswrapper[4902]: I0121 16:16:43.319261 4902 generic.go:334] "Generic (PLEG): container finished" podID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerID="6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97" exitCode=0 Jan 21 16:16:43 crc kubenswrapper[4902]: I0121 16:16:43.319371 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerDied","Data":"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97"} Jan 21 16:16:43 crc kubenswrapper[4902]: I0121 16:16:43.323890 4902 generic.go:334] "Generic (PLEG): container finished" podID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerID="0b15326eb4c064e7851c30f18a27df97dd90b66b41ed4359185f31df9de1590c" exitCode=0 Jan 21 16:16:43 crc kubenswrapper[4902]: I0121 16:16:43.323929 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerDied","Data":"0b15326eb4c064e7851c30f18a27df97dd90b66b41ed4359185f31df9de1590c"} Jan 21 16:16:44 crc kubenswrapper[4902]: I0121 16:16:44.344117 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerStarted","Data":"69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5"} Jan 21 16:16:44 crc kubenswrapper[4902]: I0121 16:16:44.349442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerStarted","Data":"ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630"} Jan 21 16:16:44 crc kubenswrapper[4902]: I0121 16:16:44.391189 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qqcwn" podStartSLOduration=2.615243393 podStartE2EDuration="7.391172229s" podCreationTimestamp="2026-01-21 16:16:37 +0000 UTC" firstStartedPulling="2026-01-21 16:16:39.249653265 +0000 UTC m=+6161.326486294" lastFinishedPulling="2026-01-21 16:16:44.025582101 +0000 UTC m=+6166.102415130" observedRunningTime="2026-01-21 16:16:44.384637875 +0000 UTC m=+6166.461470914" watchObservedRunningTime="2026-01-21 16:16:44.391172229 +0000 UTC m=+6166.468005258" Jan 21 16:16:44 crc kubenswrapper[4902]: I0121 16:16:44.391421 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgzx7" podStartSLOduration=3.892141964 podStartE2EDuration="7.391415895s" podCreationTimestamp="2026-01-21 16:16:37 +0000 UTC" firstStartedPulling="2026-01-21 16:16:39.248602525 +0000 UTC m=+6161.325435554" lastFinishedPulling="2026-01-21 16:16:42.747876456 +0000 UTC m=+6164.824709485" observedRunningTime="2026-01-21 16:16:44.365354782 +0000 UTC m=+6166.442187811" watchObservedRunningTime="2026-01-21 16:16:44.391415895 +0000 UTC m=+6166.468248914" Jan 21 16:16:45 crc kubenswrapper[4902]: I0121 16:16:45.361033 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" 
event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerStarted","Data":"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139"} Jan 21 16:16:46 crc kubenswrapper[4902]: I0121 16:16:46.374005 4902 generic.go:334] "Generic (PLEG): container finished" podID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerID="0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139" exitCode=0 Jan 21 16:16:46 crc kubenswrapper[4902]: I0121 16:16:46.374307 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerDied","Data":"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139"} Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.393376 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerStarted","Data":"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb"} Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.424588 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v5hnt" podStartSLOduration=4.9246807409999995 podStartE2EDuration="8.424564369s" podCreationTimestamp="2026-01-21 16:16:39 +0000 UTC" firstStartedPulling="2026-01-21 16:16:43.321616842 +0000 UTC m=+6165.398449881" lastFinishedPulling="2026-01-21 16:16:46.82150048 +0000 UTC m=+6168.898333509" observedRunningTime="2026-01-21 16:16:47.419493336 +0000 UTC m=+6169.496326375" watchObservedRunningTime="2026-01-21 16:16:47.424564369 +0000 UTC m=+6169.501397398" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.738231 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.738562 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.793105 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.909009 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:47 crc kubenswrapper[4902]: I0121 16:16:47.909945 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:16:48 crc kubenswrapper[4902]: I0121 16:16:48.458832 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:48 crc kubenswrapper[4902]: I0121 16:16:48.731248 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:48 crc kubenswrapper[4902]: I0121 16:16:48.904702 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:48 crc kubenswrapper[4902]: I0121 16:16:48.954362 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qqcwn" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" probeResult="failure" output=< Jan 21 16:16:48 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s 
Jan 21 16:16:48 crc kubenswrapper[4902]: > Jan 21 16:16:49 crc kubenswrapper[4902]: I0121 16:16:49.050020 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:16:49 crc kubenswrapper[4902]: I0121 16:16:49.070912 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-w8j46"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.039772 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.048600 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cb7a-account-create-update-qqdxl"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.108382 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.108447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.158807 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.305189 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b91136e9-5bad-4d5c-8eff-8a77985a1726" path="/var/lib/kubelet/pods/b91136e9-5bad-4d5c-8eff-8a77985a1726/volumes" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.305966 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ecff7c-0bbc-47c7-82b4-fbdce132c94b" path="/var/lib/kubelet/pods/d3ecff7c-0bbc-47c7-82b4-fbdce132c94b/volumes" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.368644 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.835901 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.938251 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.938532 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon-log" containerID="cri-o://4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f" gracePeriod=30 Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.939145 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" containerID="cri-o://f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b" gracePeriod=30 Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.981179 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:50 crc kubenswrapper[4902]: I0121 16:16:50.981667 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgzx7" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="registry-server" containerID="cri-o://69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5" gracePeriod=2 Jan 21 
16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.436366 4902 generic.go:334] "Generic (PLEG): container finished" podID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerID="69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5" exitCode=0 Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.436445 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerDied","Data":"69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5"} Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.437509 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgzx7" event={"ID":"e51e251d-3170-44e4-aaf6-4d288115b5c3","Type":"ContainerDied","Data":"7431d53a67478bc2afebf8017741886b32171eb148f4ba07b41d4ca2523dd676"} Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.437588 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7431d53a67478bc2afebf8017741886b32171eb148f4ba07b41d4ca2523dd676" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.494596 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.557077 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") pod \"e51e251d-3170-44e4-aaf6-4d288115b5c3\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.557219 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") pod \"e51e251d-3170-44e4-aaf6-4d288115b5c3\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.557751 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") pod \"e51e251d-3170-44e4-aaf6-4d288115b5c3\" (UID: \"e51e251d-3170-44e4-aaf6-4d288115b5c3\") " Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.558015 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities" (OuterVolumeSpecName: "utilities") pod "e51e251d-3170-44e4-aaf6-4d288115b5c3" (UID: "e51e251d-3170-44e4-aaf6-4d288115b5c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.558995 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.563154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq" (OuterVolumeSpecName: "kube-api-access-7z8nq") pod "e51e251d-3170-44e4-aaf6-4d288115b5c3" (UID: "e51e251d-3170-44e4-aaf6-4d288115b5c3"). InnerVolumeSpecName "kube-api-access-7z8nq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.583123 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e51e251d-3170-44e4-aaf6-4d288115b5c3" (UID: "e51e251d-3170-44e4-aaf6-4d288115b5c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.661222 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z8nq\" (UniqueName: \"kubernetes.io/projected/e51e251d-3170-44e4-aaf6-4d288115b5c3-kube-api-access-7z8nq\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:51 crc kubenswrapper[4902]: I0121 16:16:51.661259 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e51e251d-3170-44e4-aaf6-4d288115b5c3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:52 crc kubenswrapper[4902]: I0121 16:16:52.451019 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgzx7" Jan 21 16:16:52 crc kubenswrapper[4902]: I0121 16:16:52.479574 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:52 crc kubenswrapper[4902]: I0121 16:16:52.495088 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgzx7"] Jan 21 16:16:53 crc kubenswrapper[4902]: I0121 16:16:53.294850 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:16:53 crc kubenswrapper[4902]: E0121 16:16:53.295659 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.185701 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.197973 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.304767 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" path="/var/lib/kubelet/pods/e51e251d-3170-44e4-aaf6-4d288115b5c3/volumes" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322115 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzbxg\" (UniqueName: \"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322559 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322713 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322865 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.322979 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.323072 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.323420 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs" (OuterVolumeSpecName: "logs") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.324321 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") pod \"941246aa-c88c-4447-95a9-0efe08817612\" (UID: \"941246aa-c88c-4447-95a9-0efe08817612\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.324407 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.324573 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.324741 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") pod \"1a336745-0278-402d-b4c1-3b58f8fa66e9\" (UID: \"1a336745-0278-402d-b4c1-3b58f8fa66e9\") " Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.325506 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/941246aa-c88c-4447-95a9-0efe08817612-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.327870 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs" (OuterVolumeSpecName: "logs") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.328059 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.328336 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg" (OuterVolumeSpecName: "kube-api-access-zzbxg") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "kube-api-access-zzbxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.329592 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.330111 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz" (OuterVolumeSpecName: "kube-api-access-npmcz") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "kube-api-access-npmcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.347925 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data" (OuterVolumeSpecName: "config-data") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.351583 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data" (OuterVolumeSpecName: "config-data") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.359410 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts" (OuterVolumeSpecName: "scripts") pod "941246aa-c88c-4447-95a9-0efe08817612" (UID: "941246aa-c88c-4447-95a9-0efe08817612"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.362191 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts" (OuterVolumeSpecName: "scripts") pod "1a336745-0278-402d-b4c1-3b58f8fa66e9" (UID: "1a336745-0278-402d-b4c1-3b58f8fa66e9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428059 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/941246aa-c88c-4447-95a9-0efe08817612-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428096 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428106 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428114 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1a336745-0278-402d-b4c1-3b58f8fa66e9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428122 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941246aa-c88c-4447-95a9-0efe08817612-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428130 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npmcz\" (UniqueName: \"kubernetes.io/projected/1a336745-0278-402d-b4c1-3b58f8fa66e9-kube-api-access-npmcz\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428139 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a336745-0278-402d-b4c1-3b58f8fa66e9-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428149 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a336745-0278-402d-b4c1-3b58f8fa66e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.428162 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzbxg\" (UniqueName: \"kubernetes.io/projected/941246aa-c88c-4447-95a9-0efe08817612-kube-api-access-zzbxg\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.485369 4902 generic.go:334] "Generic (PLEG): container finished" podID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerID="f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b" exitCode=0 Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.485422 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerDied","Data":"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487211 4902 generic.go:334] "Generic (PLEG): container finished" podID="941246aa-c88c-4447-95a9-0efe08817612" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" exitCode=137 Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487233 4902 generic.go:334] "Generic (PLEG): container finished" podID="941246aa-c88c-4447-95a9-0efe08817612" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" exitCode=137 Jan 21 16:16:54 crc 
kubenswrapper[4902]: I0121 16:16:54.487260 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerDied","Data":"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487278 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerDied","Data":"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487288 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdc5859df-vpr9s" event={"ID":"941246aa-c88c-4447-95a9-0efe08817612","Type":"ContainerDied","Data":"4b82e831be0b81da6f10c4a3ff492b70b47b678bffe68000c6d720bf8f6f3d32"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487303 4902 scope.go:117] "RemoveContainer" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.487420 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdc5859df-vpr9s" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.493704 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" exitCode=137 Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.493983 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" exitCode=137 Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.494093 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerDied","Data":"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.494216 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerDied","Data":"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.494292 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5998889f69-hx8w9" event={"ID":"1a336745-0278-402d-b4c1-3b58f8fa66e9","Type":"ContainerDied","Data":"6e949b6543efff5451a444f3bd5efb9a9f0312a98d89cdd384669d329bffb82f"} Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.494401 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5998889f69-hx8w9" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.526353 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.537077 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6cdc5859df-vpr9s"] Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.553371 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.591096 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5998889f69-hx8w9"] Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.717248 4902 scope.go:117] "RemoveContainer" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.793393 4902 scope.go:117] "RemoveContainer" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" Jan 21 16:16:54 crc kubenswrapper[4902]: E0121 16:16:54.793872 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": container with ID starting with 7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d not found: ID does not exist" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.793929 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d"} err="failed to get container status \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": rpc error: code = NotFound desc = could not find container \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": container with ID starting with 7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d not found: ID does not exist" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.793957 4902 scope.go:117] "RemoveContainer" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" Jan 21 16:16:54 crc kubenswrapper[4902]: E0121 16:16:54.794534 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": container with ID starting with 65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4 not found: ID does not exist" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.794581 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4"} err="failed to get container status \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": rpc error: code = NotFound desc = could not find container \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": container with ID starting with 65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4 not found: ID does not exist" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.794615 4902 scope.go:117] "RemoveContainer" containerID="7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d" Jan 21 16:16:54 
crc kubenswrapper[4902]: I0121 16:16:54.794909 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d"} err="failed to get container status \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": rpc error: code = NotFound desc = could not find container \"7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d\": container with ID starting with 7d91343ee4259a1202b062684b95f150f03a3353e2dde4c517eeba3e47bcbf2d not found: ID does not exist" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.794931 4902 scope.go:117] "RemoveContainer" containerID="65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.795319 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4"} err="failed to get container status \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": rpc error: code = NotFound desc = could not find container \"65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4\": container with ID starting with 65f85a180e9100c74f4b8f917e063a5ffff2dd7928811a03e959acb178a2bdb4 not found: ID does not exist" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.795346 4902 scope.go:117] "RemoveContainer" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" Jan 21 16:16:54 crc kubenswrapper[4902]: I0121 16:16:54.986594 4902 scope.go:117] "RemoveContainer" containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.008902 4902 scope.go:117] "RemoveContainer" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" Jan 21 16:16:55 crc kubenswrapper[4902]: E0121 16:16:55.009421 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": container with ID starting with ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2 not found: ID does not exist" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.009472 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2"} err="failed to get container status \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": rpc error: code = NotFound desc = could not find container \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": container with ID starting with ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2 not found: ID does not exist" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.009498 4902 scope.go:117] "RemoveContainer" containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" Jan 21 16:16:55 crc kubenswrapper[4902]: E0121 16:16:55.010106 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": container with ID starting with 60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890 not found: ID does not exist" 
containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010145 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890"} err="failed to get container status \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": rpc error: code = NotFound desc = could not find container \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": container with ID starting with 60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890 not found: ID does not exist" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010173 4902 scope.go:117] "RemoveContainer" containerID="ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010454 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2"} err="failed to get container status \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": rpc error: code = NotFound desc = could not find container \"ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2\": container with ID starting with ed9ed4d43a210194226a1ae8dd3e64069f5d64038b1a9a22de5ace65be7d3ae2 not found: ID does not exist" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010483 4902 scope.go:117] "RemoveContainer" containerID="60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890" Jan 21 16:16:55 crc kubenswrapper[4902]: I0121 16:16:55.010753 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890"} err="failed to get container status \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": rpc error: code = NotFound desc = could not find container \"60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890\": container with ID starting with 60d067fcbdca3b6d3fd9c0167392feff9791cc9df860a03393b346728c7ab890 not found: ID does not exist" Jan 21 16:16:56 crc kubenswrapper[4902]: I0121 16:16:56.308935 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" path="/var/lib/kubelet/pods/1a336745-0278-402d-b4c1-3b58f8fa66e9/volumes" Jan 21 16:16:56 crc kubenswrapper[4902]: I0121 16:16:56.310247 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941246aa-c88c-4447-95a9-0efe08817612" path="/var/lib/kubelet/pods/941246aa-c88c-4447-95a9-0efe08817612/volumes" Jan 21 16:16:56 crc kubenswrapper[4902]: I0121 16:16:56.860670 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.005975 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6845bd7746-jd2dk"] Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006345 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="registry-server" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006358 4902 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="registry-server" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006372 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006378 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006391 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006398 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006415 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="extract-utilities" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006421 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="extract-utilities" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006437 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="extract-content" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006443 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="extract-content" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006453 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006459 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: E0121 16:16:58.006472 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006479 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006666 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006680 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a336745-0278-402d-b4c1-3b58f8fa66e9" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006698 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e51e251d-3170-44e4-aaf6-4d288115b5c3" containerName="registry-server" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006707 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.006719 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="941246aa-c88c-4447-95a9-0efe08817612" containerName="horizon-log" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.007680 4902 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.026397 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6845bd7746-jd2dk"] Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.068141 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.078789 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fj4nd"] Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.104665 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-tls-certs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.104994 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhtc\" (UniqueName: \"kubernetes.io/projected/d71e079c-1163-4e7e-ac94-0e92a0b602ad-kube-api-access-lmhtc\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105184 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-config-data\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105392 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d71e079c-1163-4e7e-ac94-0e92a0b602ad-logs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105443 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-secret-key\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105500 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-combined-ca-bundle\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.105595 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-scripts\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208394 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d71e079c-1163-4e7e-ac94-0e92a0b602ad-logs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208452 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-secret-key\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208492 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-combined-ca-bundle\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208554 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-scripts\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208663 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-tls-certs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208712 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhtc\" (UniqueName: \"kubernetes.io/projected/d71e079c-1163-4e7e-ac94-0e92a0b602ad-kube-api-access-lmhtc\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.208740 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-config-data\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.209612 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-scripts\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.209869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d71e079c-1163-4e7e-ac94-0e92a0b602ad-logs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.210147 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71e079c-1163-4e7e-ac94-0e92a0b602ad-config-data\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " 
pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.215110 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-combined-ca-bundle\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.216424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-secret-key\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.217418 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71e079c-1163-4e7e-ac94-0e92a0b602ad-horizon-tls-certs\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.232564 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhtc\" (UniqueName: \"kubernetes.io/projected/d71e079c-1163-4e7e-ac94-0e92a0b602ad-kube-api-access-lmhtc\") pod \"horizon-6845bd7746-jd2dk\" (UID: \"d71e079c-1163-4e7e-ac94-0e92a0b602ad\") " pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.333833 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:16:58 crc kubenswrapper[4902]: I0121 16:16:58.350015 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a9d8bd-92b5-42ef-b945-6b3ccc65b48b" path="/var/lib/kubelet/pods/97a9d8bd-92b5-42ef-b945-6b3ccc65b48b/volumes" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:58.664519 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6845bd7746-jd2dk"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:58.978013 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qqcwn" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" probeResult="failure" output=< Jan 21 16:16:59 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 16:16:59 crc kubenswrapper[4902]: > Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.422407 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-bjrq8"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.423857 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.446875 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-bjrq8"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.540553 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.540817 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.611383 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6845bd7746-jd2dk" event={"ID":"d71e079c-1163-4e7e-ac94-0e92a0b602ad","Type":"ContainerStarted","Data":"74319219889ae992328ba3bb88887722d9a59a9fb9b6d7bc638fc9ef5c4bbc13"} Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.611440 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6845bd7746-jd2dk" event={"ID":"d71e079c-1163-4e7e-ac94-0e92a0b602ad","Type":"ContainerStarted","Data":"45961fffe29aeb66e030e936b7281223fa1ca4ff3c15da5cb24296c85e4cc52f"} Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.611456 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6845bd7746-jd2dk" event={"ID":"d71e079c-1163-4e7e-ac94-0e92a0b602ad","Type":"ContainerStarted","Data":"22d498506777dee603fed38e172392f2d8ac28ff01f8b9ba12ba8e248aa24e72"} Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.643038 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.643101 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.644789 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.649713 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.651094 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.653633 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.665085 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.665517 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6845bd7746-jd2dk" podStartSLOduration=2.6654939239999997 podStartE2EDuration="2.665493924s" podCreationTimestamp="2026-01-21 16:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:16:59.644366689 +0000 UTC m=+6181.721199718" watchObservedRunningTime="2026-01-21 16:16:59.665493924 +0000 UTC m=+6181.742326953" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.684094 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") pod \"heat-db-create-bjrq8\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.745120 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.745180 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.806661 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-bjrq8" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.847714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.848036 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.848587 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.872731 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") pod \"heat-11d1-account-create-update-c7r42\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:16:59 crc kubenswrapper[4902]: I0121 16:16:59.970705 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.187692 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.272128 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.364394 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-bjrq8"] Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.515471 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.651552 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bjrq8" event={"ID":"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c","Type":"ContainerStarted","Data":"745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f"} Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.651922 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bjrq8" event={"ID":"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c","Type":"ContainerStarted","Data":"a98983d796b14519949744b954be43459febb454e85432558a7cb24d6e5aa795"} Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.654486 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-11d1-account-create-update-c7r42" event={"ID":"ff9e17b7-5e08-4042-9b1b-ccad64651eef","Type":"ContainerStarted","Data":"8796f7c4e100512924c5fa2afe8b23fc4c173a893529ebcc3ca8923c22390fcb"} Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.654821 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v5hnt" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="registry-server" containerID="cri-o://92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" gracePeriod=2 Jan 21 16:17:00 crc kubenswrapper[4902]: I0121 16:17:00.672491 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-bjrq8" podStartSLOduration=1.672467591 podStartE2EDuration="1.672467591s" podCreationTimestamp="2026-01-21 16:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:00.669211229 +0000 UTC m=+6182.746044258" watchObservedRunningTime="2026-01-21 16:17:00.672467591 +0000 UTC m=+6182.749300620" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.025683 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.074506 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") pod \"91829544-e720-43f3-b3dd-3f1240beb6f6\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.074709 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") pod \"91829544-e720-43f3-b3dd-3f1240beb6f6\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.074753 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") pod \"91829544-e720-43f3-b3dd-3f1240beb6f6\" (UID: \"91829544-e720-43f3-b3dd-3f1240beb6f6\") " Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.077366 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities" (OuterVolumeSpecName: "utilities") pod "91829544-e720-43f3-b3dd-3f1240beb6f6" (UID: "91829544-e720-43f3-b3dd-3f1240beb6f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.082457 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl" (OuterVolumeSpecName: "kube-api-access-bk5jl") pod "91829544-e720-43f3-b3dd-3f1240beb6f6" (UID: "91829544-e720-43f3-b3dd-3f1240beb6f6"). InnerVolumeSpecName "kube-api-access-bk5jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.128627 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91829544-e720-43f3-b3dd-3f1240beb6f6" (UID: "91829544-e720-43f3-b3dd-3f1240beb6f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.177136 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.177179 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91829544-e720-43f3-b3dd-3f1240beb6f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.177192 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk5jl\" (UniqueName: \"kubernetes.io/projected/91829544-e720-43f3-b3dd-3f1240beb6f6-kube-api-access-bk5jl\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.664768 4902 generic.go:334] "Generic (PLEG): container finished" podID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerID="92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" exitCode=0 Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.664896 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5hnt" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.664921 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerDied","Data":"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb"} Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.669118 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5hnt" event={"ID":"91829544-e720-43f3-b3dd-3f1240beb6f6","Type":"ContainerDied","Data":"ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9"} Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.669154 4902 scope.go:117] "RemoveContainer" containerID="92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.671527 4902 generic.go:334] "Generic (PLEG): container finished" podID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" containerID="745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f" exitCode=0 Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.671584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bjrq8" event={"ID":"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c","Type":"ContainerDied","Data":"745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f"} Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.677610 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" containerID="8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482" exitCode=0 Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.677665 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-11d1-account-create-update-c7r42" event={"ID":"ff9e17b7-5e08-4042-9b1b-ccad64651eef","Type":"ContainerDied","Data":"8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482"} Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.700947 4902 scope.go:117] "RemoveContainer" containerID="0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.724195 4902 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.727728 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v5hnt"] Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.736984 4902 scope.go:117] "RemoveContainer" containerID="6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.769462 4902 scope.go:117] "RemoveContainer" containerID="92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" Jan 21 16:17:01 crc kubenswrapper[4902]: E0121 16:17:01.769945 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb\": container with ID starting with 92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb not found: ID does not exist" containerID="92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.769977 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb"} err="failed to get container status \"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb\": rpc error: code = NotFound desc = could not find container \"92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb\": container with ID starting with 92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb not found: ID does not exist" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.770000 4902 scope.go:117] "RemoveContainer" containerID="0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139" Jan 21 16:17:01 crc kubenswrapper[4902]: E0121 16:17:01.770273 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139\": container with ID starting with 0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139 not found: ID does not exist" containerID="0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.770325 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139"} err="failed to get container status \"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139\": rpc error: code = NotFound desc = could not find container \"0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139\": container with ID starting with 0b5ea8e0864ea4a4799bd6a5f588f36cddb6c93d274d93f03c01db02fde37139 not found: ID does not exist" Jan 21 16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.770340 4902 scope.go:117] "RemoveContainer" containerID="6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97" Jan 21 16:17:01 crc kubenswrapper[4902]: E0121 16:17:01.770587 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97\": container with ID starting with 6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97 not found: ID does not exist" containerID="6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97" Jan 21 
16:17:01 crc kubenswrapper[4902]: I0121 16:17:01.770607 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97"} err="failed to get container status \"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97\": rpc error: code = NotFound desc = could not find container \"6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97\": container with ID starting with 6329a41f5316fabec7507736cb3292b9c613095ba918a47cee6ecb04cb936c97 not found: ID does not exist" Jan 21 16:17:02 crc kubenswrapper[4902]: I0121 16:17:02.310620 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" path="/var/lib/kubelet/pods/91829544-e720-43f3-b3dd-3f1240beb6f6/volumes" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.204023 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.211696 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-bjrq8" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.316526 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") pod \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.316674 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") pod \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.316720 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") pod \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\" (UID: \"ff9e17b7-5e08-4042-9b1b-ccad64651eef\") " Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.316849 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") pod \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\" (UID: \"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c\") " Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.317232 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff9e17b7-5e08-4042-9b1b-ccad64651eef" (UID: "ff9e17b7-5e08-4042-9b1b-ccad64651eef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.317619 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" (UID: "217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.317905 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff9e17b7-5e08-4042-9b1b-ccad64651eef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.317932 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.323710 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x" (OuterVolumeSpecName: "kube-api-access-rlb9x") pod "217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" (UID: "217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c"). InnerVolumeSpecName "kube-api-access-rlb9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.324506 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz" (OuterVolumeSpecName: "kube-api-access-cbhnz") pod "ff9e17b7-5e08-4042-9b1b-ccad64651eef" (UID: "ff9e17b7-5e08-4042-9b1b-ccad64651eef"). InnerVolumeSpecName "kube-api-access-cbhnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.420233 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlb9x\" (UniqueName: \"kubernetes.io/projected/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c-kube-api-access-rlb9x\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.420263 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbhnz\" (UniqueName: \"kubernetes.io/projected/ff9e17b7-5e08-4042-9b1b-ccad64651eef-kube-api-access-cbhnz\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.705219 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-11d1-account-create-update-c7r42" event={"ID":"ff9e17b7-5e08-4042-9b1b-ccad64651eef","Type":"ContainerDied","Data":"8796f7c4e100512924c5fa2afe8b23fc4c173a893529ebcc3ca8923c22390fcb"} Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.705253 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-11d1-account-create-update-c7r42" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.705278 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8796f7c4e100512924c5fa2afe8b23fc4c173a893529ebcc3ca8923c22390fcb" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.710888 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-bjrq8" event={"ID":"217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c","Type":"ContainerDied","Data":"a98983d796b14519949744b954be43459febb454e85432558a7cb24d6e5aa795"} Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.710938 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a98983d796b14519949744b954be43459febb454e85432558a7cb24d6e5aa795" Jan 21 16:17:03 crc kubenswrapper[4902]: I0121 16:17:03.710993 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-bjrq8" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794133 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-5zjtz"] Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.794902 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="extract-utilities" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794917 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="extract-utilities" Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.794927 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" containerName="mariadb-account-create-update" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794933 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" containerName="mariadb-account-create-update" Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.794959 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="extract-content" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794966 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="extract-content" Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.794980 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" containerName="mariadb-database-create" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.794986 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" containerName="mariadb-database-create" Jan 21 16:17:04 crc kubenswrapper[4902]: E0121 16:17:04.795001 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="registry-server" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795007 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="registry-server" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795198 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" containerName="mariadb-database-create" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795212 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" containerName="mariadb-account-create-update" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795231 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="91829544-e720-43f3-b3dd-3f1240beb6f6" containerName="registry-server" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.795917 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.811146 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q7twz" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.811271 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.811406 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-5zjtz"] Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.949174 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.949340 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:04 crc kubenswrapper[4902]: I0121 16:17:04.949432 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.051336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.051449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.051514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.058303 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.059001 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" 
Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.080498 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") pod \"heat-db-sync-5zjtz\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.131546 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.295035 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:17:05 crc kubenswrapper[4902]: E0121 16:17:05.295556 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.602469 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-5zjtz"] Jan 21 16:17:05 crc kubenswrapper[4902]: I0121 16:17:05.731467 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5zjtz" event={"ID":"b1a02641-de79-49cd-91a4-d689c669a38c","Type":"ContainerStarted","Data":"f45d4ff100cf62cbe14d607f941d4754955d6a683f1577c68cfa3d2ab9bbff49"} Jan 21 16:17:06 crc kubenswrapper[4902]: I0121 16:17:06.861352 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 21 16:17:07 crc kubenswrapper[4902]: I0121 16:17:07.966272 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:17:08 crc kubenswrapper[4902]: I0121 16:17:08.036809 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:17:08 crc kubenswrapper[4902]: I0121 16:17:08.334892 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:17:08 crc kubenswrapper[4902]: I0121 16:17:08.335237 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:17:08 crc kubenswrapper[4902]: I0121 16:17:08.605806 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:17:09 crc kubenswrapper[4902]: I0121 16:17:09.798741 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qqcwn" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" containerID="cri-o://ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630" gracePeriod=2 Jan 21 16:17:10 crc kubenswrapper[4902]: I0121 16:17:10.833006 4902 generic.go:334] "Generic (PLEG): container finished" podID="2c558c0f-33c5-4584-b548-fc5af8cee89e" 
containerID="ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630" exitCode=0 Jan 21 16:17:10 crc kubenswrapper[4902]: I0121 16:17:10.833067 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerDied","Data":"ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630"} Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.532921 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.591802 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") pod \"2c558c0f-33c5-4584-b548-fc5af8cee89e\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.591853 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") pod \"2c558c0f-33c5-4584-b548-fc5af8cee89e\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.592083 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") pod \"2c558c0f-33c5-4584-b548-fc5af8cee89e\" (UID: \"2c558c0f-33c5-4584-b548-fc5af8cee89e\") " Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.594057 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities" (OuterVolumeSpecName: "utilities") pod "2c558c0f-33c5-4584-b548-fc5af8cee89e" (UID: "2c558c0f-33c5-4584-b548-fc5af8cee89e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.596754 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79" (OuterVolumeSpecName: "kube-api-access-77h79") pod "2c558c0f-33c5-4584-b548-fc5af8cee89e" (UID: "2c558c0f-33c5-4584-b548-fc5af8cee89e"). InnerVolumeSpecName "kube-api-access-77h79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.694481 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77h79\" (UniqueName: \"kubernetes.io/projected/2c558c0f-33c5-4584-b548-fc5af8cee89e-kube-api-access-77h79\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.694557 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.723948 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c558c0f-33c5-4584-b548-fc5af8cee89e" (UID: "2c558c0f-33c5-4584-b548-fc5af8cee89e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.796277 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c558c0f-33c5-4584-b548-fc5af8cee89e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.860637 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5zjtz" event={"ID":"b1a02641-de79-49cd-91a4-d689c669a38c","Type":"ContainerStarted","Data":"12cbd897a8c963b1753af6838fe6f74f721c8f8e6f46ac0835b5c50a96042e89"} Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.863274 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcwn" event={"ID":"2c558c0f-33c5-4584-b548-fc5af8cee89e","Type":"ContainerDied","Data":"dee8922d5520e1f0c611f74ca26c6dc79b0da6d5a6e133ff12dfae78dbe2c30a"} Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.863312 4902 scope.go:117] "RemoveContainer" containerID="ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.863345 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcwn" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.909748 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-5zjtz" podStartSLOduration=2.190239242 podStartE2EDuration="9.909727821s" podCreationTimestamp="2026-01-21 16:17:04 +0000 UTC" firstStartedPulling="2026-01-21 16:17:05.618539994 +0000 UTC m=+6187.695373013" lastFinishedPulling="2026-01-21 16:17:13.338028563 +0000 UTC m=+6195.414861592" observedRunningTime="2026-01-21 16:17:13.894475862 +0000 UTC m=+6195.971308891" watchObservedRunningTime="2026-01-21 16:17:13.909727821 +0000 UTC m=+6195.986560850" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.917447 4902 scope.go:117] "RemoveContainer" containerID="0b15326eb4c064e7851c30f18a27df97dd90b66b41ed4359185f31df9de1590c" Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.949922 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.963650 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qqcwn"] Jan 21 16:17:13 crc kubenswrapper[4902]: I0121 16:17:13.990062 4902 scope.go:117] "RemoveContainer" containerID="3978acdb017791c813d4f5337aced828704d5e523e45d415b1601e3ec73ed790" Jan 21 16:17:14 crc kubenswrapper[4902]: I0121 16:17:14.310189 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" path="/var/lib/kubelet/pods/2c558c0f-33c5-4584-b548-fc5af8cee89e/volumes" Jan 21 16:17:16 crc kubenswrapper[4902]: I0121 16:17:16.861590 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7dd785d478-plbs7" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 21 16:17:16 crc kubenswrapper[4902]: I0121 16:17:16.862558 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:17:16 crc kubenswrapper[4902]: I0121 16:17:16.904744 4902 generic.go:334] "Generic (PLEG): 
container finished" podID="b1a02641-de79-49cd-91a4-d689c669a38c" containerID="12cbd897a8c963b1753af6838fe6f74f721c8f8e6f46ac0835b5c50a96042e89" exitCode=0 Jan 21 16:17:16 crc kubenswrapper[4902]: I0121 16:17:16.904810 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5zjtz" event={"ID":"b1a02641-de79-49cd-91a4-d689c669a38c","Type":"ContainerDied","Data":"12cbd897a8c963b1753af6838fe6f74f721c8f8e6f46ac0835b5c50a96042e89"} Jan 21 16:17:17 crc kubenswrapper[4902]: I0121 16:17:17.295251 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:17:17 crc kubenswrapper[4902]: E0121 16:17:17.295823 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.321286 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.493249 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") pod \"b1a02641-de79-49cd-91a4-d689c669a38c\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.493587 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") pod \"b1a02641-de79-49cd-91a4-d689c669a38c\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.493684 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") pod \"b1a02641-de79-49cd-91a4-d689c669a38c\" (UID: \"b1a02641-de79-49cd-91a4-d689c669a38c\") " Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.499035 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr" (OuterVolumeSpecName: "kube-api-access-hxmnr") pod "b1a02641-de79-49cd-91a4-d689c669a38c" (UID: "b1a02641-de79-49cd-91a4-d689c669a38c"). InnerVolumeSpecName "kube-api-access-hxmnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.525273 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1a02641-de79-49cd-91a4-d689c669a38c" (UID: "b1a02641-de79-49cd-91a4-d689c669a38c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.562741 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data" (OuterVolumeSpecName: "config-data") pod "b1a02641-de79-49cd-91a4-d689c669a38c" (UID: "b1a02641-de79-49cd-91a4-d689c669a38c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.595285 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.595309 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxmnr\" (UniqueName: \"kubernetes.io/projected/b1a02641-de79-49cd-91a4-d689c669a38c-kube-api-access-hxmnr\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.595319 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a02641-de79-49cd-91a4-d689c669a38c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.926389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-5zjtz" event={"ID":"b1a02641-de79-49cd-91a4-d689c669a38c","Type":"ContainerDied","Data":"f45d4ff100cf62cbe14d607f941d4754955d6a683f1577c68cfa3d2ab9bbff49"} Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.926430 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f45d4ff100cf62cbe14d607f941d4754955d6a683f1577c68cfa3d2ab9bbff49" Jan 21 16:17:18 crc kubenswrapper[4902]: I0121 16:17:18.926469 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-5zjtz" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.654999 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:20 crc kubenswrapper[4902]: E0121 16:17:20.655826 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="extract-utilities" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.655846 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="extract-utilities" Jan 21 16:17:20 crc kubenswrapper[4902]: E0121 16:17:20.655880 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="extract-content" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.655889 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="extract-content" Jan 21 16:17:20 crc kubenswrapper[4902]: E0121 16:17:20.655903 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.655911 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" Jan 21 16:17:20 crc kubenswrapper[4902]: E0121 16:17:20.655930 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a02641-de79-49cd-91a4-d689c669a38c" containerName="heat-db-sync" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.655937 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a02641-de79-49cd-91a4-d689c669a38c" containerName="heat-db-sync" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.656183 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c558c0f-33c5-4584-b548-fc5af8cee89e" containerName="registry-server" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.656201 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a02641-de79-49cd-91a4-d689c669a38c" containerName="heat-db-sync" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.657100 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.662326 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.667815 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q7twz" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.677444 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.698633 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.722637 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.762074 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.763805 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.767987 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.782567 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.782682 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.782798 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.782834 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.783178 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.783239 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.783282 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.783437 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.789879 4902 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.848734 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.850420 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.854832 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.862217 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886388 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886527 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886576 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886632 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886656 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886735 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886820 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" 
Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886869 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.886918 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.887126 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.887155 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.887242 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.895113 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.895869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.896652 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.898334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.904421 4902 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.913392 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.914875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") pod \"heat-engine-77695bdf6-844ml\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.917304 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") pod \"heat-api-cf8444c78-xmqt2\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.989913 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.990011 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.990102 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.990154 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:20 crc kubenswrapper[4902]: I0121 16:17:20.993294 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:21 crc kubenswrapper[4902]: W0121 16:17:21.013502 4902 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1a02641_de79_49cd_91a4_d689c669a38c.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1a02641_de79_49cd_91a4_d689c669a38c.slice: no such file or directory Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.024170 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.024840 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.028986 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.048940 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") pod \"heat-cfnapi-65bd9b7448-nvhqd\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.110083 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.171557 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:21 crc kubenswrapper[4902]: E0121 16:17:21.392257 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice/crio-conmon-92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b8ff7ce_f44c_45d2_ac7c_ddebb604798c.slice/crio-4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff9e17b7_5e08_4042_9b1b_ccad64651eef.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c558c0f_33c5_4584_b548_fc5af8cee89e.slice/crio-ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217952b8_c6e3_44ba_b5f2_dabc3dfa9b1c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff9e17b7_5e08_4042_9b1b_ccad64651eef.slice/crio-8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c558c0f_33c5_4584_b548_fc5af8cee89e.slice/crio-dee8922d5520e1f0c611f74ca26c6dc79b0da6d5a6e133ff12dfae78dbe2c30a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217952b8_c6e3_44ba_b5f2_dabc3dfa9b1c.slice/crio-conmon-745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217952b8_c6e3_44ba_b5f2_dabc3dfa9b1c.slice/crio-a98983d796b14519949744b954be43459febb454e85432558a7cb24d6e5aa795\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b8ff7ce_f44c_45d2_ac7c_ddebb604798c.slice/crio-conmon-4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice/crio-ff69022f1f995adb49589ec367e0815a70cf064bef008a8fa059b4c649c81ff9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c558c0f_33c5_4584_b548_fc5af8cee89e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c558c0f_33c5_4584_b548_fc5af8cee89e.slice/crio-conmon-ef3174310ae77ae7733c59eda9f2154edea9a69da8c15f2f7b007132379ea630.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91829544_e720_43f3_b3dd_3f1240beb6f6.slice/crio-92ed89e7ce58e9bbba19fc1558c780f87d905abf308b061c101efd0ab4c22feb.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217952b8_c6e3_44ba_b5f2_dabc3dfa9b1c.slice/crio-745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff9e17b7_5e08_4042_9b1b_ccad64651eef.slice/crio-8796f7c4e100512924c5fa2afe8b23fc4c173a893529ebcc3ca8923c22390fcb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff9e17b7_5e08_4042_9b1b_ccad64651eef.slice/crio-conmon-8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.646759 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:21 crc kubenswrapper[4902]: W0121 16:17:21.654322 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26cf64d9_8389_473d_a51f_2ca282b5787f.slice/crio-cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133 WatchSource:0}: Error finding container cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133: Status 404 returned error can't find the container with id cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133 Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.747334 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.918828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.918885 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.918979 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.919032 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.919138 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.919213 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.919240 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") pod \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\" (UID: \"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c\") " Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.923593 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs" (OuterVolumeSpecName: "logs") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.934690 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh" (OuterVolumeSpecName: "kube-api-access-vbsdh") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "kube-api-access-vbsdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.935329 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.939064 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.958921 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts" (OuterVolumeSpecName: "scripts") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: W0121 16:17:21.960286 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda67ffd84_72d3_4d63_b99a_0fe8ebe12753.slice/crio-c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd WatchSource:0}: Error finding container c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd: Status 404 returned error can't find the container with id c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964537 4902 generic.go:334] "Generic (PLEG): container finished" podID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerID="4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f" exitCode=137 Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964630 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerDied","Data":"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f"} Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964662 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd785d478-plbs7" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964670 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd785d478-plbs7" event={"ID":"3b8ff7ce-f44c-45d2-ac7c-ddebb604798c","Type":"ContainerDied","Data":"4686f3e06661bcfec8e992129cddce590c37e819ce3cb3dd7fbf805003f4c581"} Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.964694 4902 scope.go:117] "RemoveContainer" containerID="f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.977271 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.982617 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77695bdf6-844ml" event={"ID":"26cf64d9-8389-473d-a51f-2ca282b5787f","Type":"ContainerStarted","Data":"c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48"} Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.982655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77695bdf6-844ml" event={"ID":"26cf64d9-8389-473d-a51f-2ca282b5787f","Type":"ContainerStarted","Data":"cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133"} Jan 21 16:17:21 crc kubenswrapper[4902]: I0121 16:17:21.983747 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.005999 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data" (OuterVolumeSpecName: "config-data") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.006961 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" (UID: "3b8ff7ce-f44c-45d2-ac7c-ddebb604798c"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.007488 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-77695bdf6-844ml" podStartSLOduration=2.007469935 podStartE2EDuration="2.007469935s" podCreationTimestamp="2026-01-21 16:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:22.004626885 +0000 UTC m=+6204.081459924" watchObservedRunningTime="2026-01-21 16:17:22.007469935 +0000 UTC m=+6204.084302964" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024159 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024186 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024197 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024229 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024239 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024248 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbsdh\" (UniqueName: \"kubernetes.io/projected/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-kube-api-access-vbsdh\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.024257 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.114033 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.255457 4902 scope.go:117] "RemoveContainer" containerID="4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.320472 4902 scope.go:117] "RemoveContainer" containerID="f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b" Jan 21 16:17:22 crc kubenswrapper[4902]: E0121 16:17:22.321145 4902 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b\": container with ID starting with f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b not found: ID does not exist" containerID="f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.321255 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b"} err="failed to get container status \"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b\": rpc error: code = NotFound desc = could not find container \"f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b\": container with ID starting with f707b6944bc0d8ced6b5b35ddb4238c341f47e9b494312a65b3d4698931ef54b not found: ID does not exist" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.321362 4902 scope.go:117] "RemoveContainer" containerID="4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f" Jan 21 16:17:22 crc kubenswrapper[4902]: E0121 16:17:22.325966 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f\": container with ID starting with 4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f not found: ID does not exist" containerID="4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.326010 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f"} err="failed to get container status \"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f\": rpc error: code = NotFound desc = could not find container \"4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f\": container with ID starting with 4db0313958c94d65da2ff361b65c0e54615f1c9e9602dcbe34319f3a83f02e7f not found: ID does not exist" Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.328800 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:17:22 crc kubenswrapper[4902]: I0121 16:17:22.336002 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7dd785d478-plbs7"] Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.000178 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" event={"ID":"b55674f9-c7ae-4344-979f-d80fc2d0e03b","Type":"ContainerStarted","Data":"8ddcaecce27ce57414d483ff30439ab5d5ef54218cadf2c9d61387f3947e594b"} Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.006413 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf8444c78-xmqt2" event={"ID":"a67ffd84-72d3-4d63-b99a-0fe8ebe12753","Type":"ContainerStarted","Data":"c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd"} Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.437023 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6845bd7746-jd2dk" Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.498860 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.499332 4902 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" containerID="cri-o://b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf" gracePeriod=30 Jan 21 16:17:23 crc kubenswrapper[4902]: I0121 16:17:23.499195 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon-log" containerID="cri-o://dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd" gracePeriod=30 Jan 21 16:17:24 crc kubenswrapper[4902]: I0121 16:17:24.310374 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" path="/var/lib/kubelet/pods/3b8ff7ce-f44c-45d2-ac7c-ddebb604798c/volumes" Jan 21 16:17:25 crc kubenswrapper[4902]: I0121 16:17:25.225833 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" event={"ID":"b55674f9-c7ae-4344-979f-d80fc2d0e03b","Type":"ContainerStarted","Data":"3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f"} Jan 21 16:17:25 crc kubenswrapper[4902]: I0121 16:17:25.226615 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:25 crc kubenswrapper[4902]: I0121 16:17:25.245876 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" podStartSLOduration=3.161307694 podStartE2EDuration="5.245848544s" podCreationTimestamp="2026-01-21 16:17:20 +0000 UTC" firstStartedPulling="2026-01-21 16:17:22.266492284 +0000 UTC m=+6204.343325313" lastFinishedPulling="2026-01-21 16:17:24.351033124 +0000 UTC m=+6206.427866163" observedRunningTime="2026-01-21 16:17:25.242744947 +0000 UTC m=+6207.319577976" watchObservedRunningTime="2026-01-21 16:17:25.245848544 +0000 UTC m=+6207.322681573" Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.245549 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf8444c78-xmqt2" event={"ID":"a67ffd84-72d3-4d63-b99a-0fe8ebe12753","Type":"ContainerStarted","Data":"94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd"} Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.245823 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.301619 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-cf8444c78-xmqt2" podStartSLOduration=2.812518279 podStartE2EDuration="6.301588813s" podCreationTimestamp="2026-01-21 16:17:20 +0000 UTC" firstStartedPulling="2026-01-21 16:17:21.962453638 +0000 UTC m=+6204.039286667" lastFinishedPulling="2026-01-21 16:17:25.451524172 +0000 UTC m=+6207.528357201" observedRunningTime="2026-01-21 16:17:26.277568407 +0000 UTC m=+6208.354401466" watchObservedRunningTime="2026-01-21 16:17:26.301588813 +0000 UTC m=+6208.378421862" Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.407298 4902 scope.go:117] "RemoveContainer" containerID="65fe44f3b0e17d56dbcc24184af4bec7f8662c78351c1314a8a65ecfa5dbb257" Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.525892 4902 scope.go:117] "RemoveContainer" containerID="f86be6b9f95f42ed7575c1db6ca5d50d96cc6520921a01dd5dff53f1cdbb4ae8" Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.589990 4902 scope.go:117] 
"RemoveContainer" containerID="371e1a26e1a76ba398e48c1e98072317dd29a6c8abf9e8ab60b15d658481161c" Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.623245 4902 scope.go:117] "RemoveContainer" containerID="efad9d3030aa3752d324b9640e74fe010cdfafc51d4ab887dfdd4055c1f6fa5a" Jan 21 16:17:26 crc kubenswrapper[4902]: I0121 16:17:26.950025 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.111:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8443: connect: connection refused" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.253967 4902 generic.go:334] "Generic (PLEG): container finished" podID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerID="b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf" exitCode=0 Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.254028 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerDied","Data":"b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf"} Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.873486 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-68647965fb-5bvjr"] Jan 21 16:17:27 crc kubenswrapper[4902]: E0121 16:17:27.874099 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.874114 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" Jan 21 16:17:27 crc kubenswrapper[4902]: E0121 16:17:27.874145 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon-log" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.874152 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon-log" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.874332 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon-log" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.874354 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8ff7ce-f44c-45d2-ac7c-ddebb604798c" containerName="horizon" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.875010 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.889253 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.890820 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.899967 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.901745 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:27 crc kubenswrapper[4902]: I0121 16:17:27.920394 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68647965fb-5bvjr"] Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:27.928932 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.256770 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.256886 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.256930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.256975 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krptw\" (UniqueName: \"kubernetes.io/projected/bb701a34-be50-44cd-b277-b687e8499664-kube-api-access-krptw\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.257007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-combined-ca-bundle\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.257028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data-custom\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.258930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.258964 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") pod 
\"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.259004 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.259066 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.259148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.259176 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.299563 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:17:28 crc kubenswrapper[4902]: E0121 16:17:28.299817 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.362916 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krptw\" (UniqueName: \"kubernetes.io/projected/bb701a34-be50-44cd-b277-b687e8499664-kube-api-access-krptw\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.362998 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-combined-ca-bundle\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363022 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data-custom\") pod \"heat-engine-68647965fb-5bvjr\" (UID: 
\"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363097 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363113 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363148 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363188 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363302 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363365 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363539 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.363590 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " 
pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.370390 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.371382 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.371672 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.373291 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.375502 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.381583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-combined-ca-bundle\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.381951 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb701a34-be50-44cd-b277-b687e8499664-config-data-custom\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.382492 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.386390 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") pod \"heat-api-575c784d98-scqmc\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.397007 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") pod \"heat-api-575c784d98-scqmc\" (UID: 
\"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.406834 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.407112 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krptw\" (UniqueName: \"kubernetes.io/projected/bb701a34-be50-44cd-b277-b687e8499664-kube-api-access-krptw\") pod \"heat-engine-68647965fb-5bvjr\" (UID: \"bb701a34-be50-44cd-b277-b687e8499664\") " pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.407301 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") pod \"heat-cfnapi-74df5fd5cf-g8qhb\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.496710 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.628091 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:28 crc kubenswrapper[4902]: I0121 16:17:28.661693 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.235607 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68647965fb-5bvjr"] Jan 21 16:17:29 crc kubenswrapper[4902]: W0121 16:17:29.252641 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb701a34_be50_44cd_b277_b687e8499664.slice/crio-aadd5a2a19662f4118bce40a15337299b79d1652cfab8ff90921ae10164fa015 WatchSource:0}: Error finding container aadd5a2a19662f4118bce40a15337299b79d1652cfab8ff90921ae10164fa015: Status 404 returned error can't find the container with id aadd5a2a19662f4118bce40a15337299b79d1652cfab8ff90921ae10164fa015 Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.328524 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68647965fb-5bvjr" event={"ID":"bb701a34-be50-44cd-b277-b687e8499664","Type":"ContainerStarted","Data":"aadd5a2a19662f4118bce40a15337299b79d1652cfab8ff90921ae10164fa015"} Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.440566 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.560328 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.662865 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.663086 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-cf8444c78-xmqt2" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerName="heat-api" 
containerID="cri-o://94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd" gracePeriod=60 Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.759650 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.760128 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" containerID="cri-o://3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f" gracePeriod=60 Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.801325 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.123:8000/healthcheck\": EOF" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.828808 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-575dc5884b-mwxz4"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.833829 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.840546 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.840736 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.903413 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-575dc5884b-mwxz4"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.926550 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5c8d887b44-lnw77"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.928325 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.932469 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.932528 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.946288 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c8d887b44-lnw77"] Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.950622 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data-custom\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.952398 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7zv\" (UniqueName: \"kubernetes.io/projected/9bfec31e-5cec-4820-9f26-34413330e44c-kube-api-access-4k7zv\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.952655 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-internal-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.953170 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.953319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-public-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:29 crc kubenswrapper[4902]: I0121 16:17:29.953487 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-combined-ca-bundle\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.054926 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data-custom\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.054977 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4k7zv\" (UniqueName: \"kubernetes.io/projected/9bfec31e-5cec-4820-9f26-34413330e44c-kube-api-access-4k7zv\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055019 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-internal-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055080 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055112 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055150 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-public-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055177 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-internal-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055211 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-combined-ca-bundle\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055243 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g462c\" (UniqueName: \"kubernetes.io/projected/5acd47b5-1a65-41c3-af06-401bd9880c1f-kube-api-access-g462c\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055262 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-public-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055287 4902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-combined-ca-bundle\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.055329 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data-custom\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.060925 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-internal-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.061146 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-combined-ca-bundle\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.062170 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-public-tls-certs\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.062961 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.063725 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bfec31e-5cec-4820-9f26-34413330e44c-config-data-custom\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.078451 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k7zv\" (UniqueName: \"kubernetes.io/projected/9bfec31e-5cec-4820-9f26-34413330e44c-kube-api-access-4k7zv\") pod \"heat-api-575dc5884b-mwxz4\" (UID: \"9bfec31e-5cec-4820-9f26-34413330e44c\") " pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157314 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157437 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-internal-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157537 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g462c\" (UniqueName: \"kubernetes.io/projected/5acd47b5-1a65-41c3-af06-401bd9880c1f-kube-api-access-g462c\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157573 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-public-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157616 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-combined-ca-bundle\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.157664 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data-custom\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.165017 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-internal-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.165666 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data-custom\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.165827 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-public-tls-certs\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.167204 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-config-data\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.167670 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5acd47b5-1a65-41c3-af06-401bd9880c1f-combined-ca-bundle\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.177773 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g462c\" (UniqueName: \"kubernetes.io/projected/5acd47b5-1a65-41c3-af06-401bd9880c1f-kube-api-access-g462c\") pod \"heat-cfnapi-5c8d887b44-lnw77\" (UID: \"5acd47b5-1a65-41c3-af06-401bd9880c1f\") " pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.218426 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.254786 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.344590 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68647965fb-5bvjr" event={"ID":"bb701a34-be50-44cd-b277-b687e8499664","Type":"ContainerStarted","Data":"fdd548492b6e40f80b093141b68efce769abae41b8a31f386ea29bbd895d7193"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.346488 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.362742 4902 generic.go:334] "Generic (PLEG): container finished" podID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerID="15c4f131c64554319e2cd62000d98288a117ca09d8376d69bd4cecdd1964137c" exitCode=1 Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.362813 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerDied","Data":"15c4f131c64554319e2cd62000d98288a117ca09d8376d69bd4cecdd1964137c"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.362844 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerStarted","Data":"3cccd66a946a157a90314b5d37ad28c489b5d8240c4b1c57e8f9ef0b761138da"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.363234 4902 scope.go:117] "RemoveContainer" containerID="15c4f131c64554319e2cd62000d98288a117ca09d8376d69bd4cecdd1964137c" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.375372 4902 generic.go:334] "Generic (PLEG): container finished" podID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerID="f3c2f8d67445ae1d14e1a237160100d289da6488adf80971ccb697a8822a9fb5" exitCode=1 Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.375463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerDied","Data":"f3c2f8d67445ae1d14e1a237160100d289da6488adf80971ccb697a8822a9fb5"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.375490 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerStarted","Data":"9bd457e7b58a6be152461f46b27b4879936f13cc46a82ddd9cea36267b98f5de"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.376113 4902 scope.go:117] "RemoveContainer" 
containerID="f3c2f8d67445ae1d14e1a237160100d289da6488adf80971ccb697a8822a9fb5" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.393273 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-68647965fb-5bvjr" podStartSLOduration=3.393245543 podStartE2EDuration="3.393245543s" podCreationTimestamp="2026-01-21 16:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:30.371536012 +0000 UTC m=+6212.448369041" watchObservedRunningTime="2026-01-21 16:17:30.393245543 +0000 UTC m=+6212.470078572" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.400187 4902 generic.go:334] "Generic (PLEG): container finished" podID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerID="94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd" exitCode=0 Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.400227 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf8444c78-xmqt2" event={"ID":"a67ffd84-72d3-4d63-b99a-0fe8ebe12753","Type":"ContainerDied","Data":"94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd"} Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.741442 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.783705 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") pod \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.783828 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") pod \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.783881 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") pod \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.783902 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") pod \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\" (UID: \"a67ffd84-72d3-4d63-b99a-0fe8ebe12753\") " Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.800162 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a67ffd84-72d3-4d63-b99a-0fe8ebe12753" (UID: "a67ffd84-72d3-4d63-b99a-0fe8ebe12753"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.811328 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98" (OuterVolumeSpecName: "kube-api-access-5vx98") pod "a67ffd84-72d3-4d63-b99a-0fe8ebe12753" (UID: "a67ffd84-72d3-4d63-b99a-0fe8ebe12753"). InnerVolumeSpecName "kube-api-access-5vx98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.843482 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a67ffd84-72d3-4d63-b99a-0fe8ebe12753" (UID: "a67ffd84-72d3-4d63-b99a-0fe8ebe12753"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.892250 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.892286 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.892298 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vx98\" (UniqueName: \"kubernetes.io/projected/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-kube-api-access-5vx98\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4902]: I0121 16:17:30.954126 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data" (OuterVolumeSpecName: "config-data") pod "a67ffd84-72d3-4d63-b99a-0fe8ebe12753" (UID: "a67ffd84-72d3-4d63-b99a-0fe8ebe12753"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.000657 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a67ffd84-72d3-4d63-b99a-0fe8ebe12753-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.057243 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c8d887b44-lnw77"] Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.063813 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-575dc5884b-mwxz4"] Jan 21 16:17:31 crc kubenswrapper[4902]: W0121 16:17:31.124673 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5acd47b5_1a65_41c3_af06_401bd9880c1f.slice/crio-7eda3ce01e8c2937250dbb787105cd4df25d2e87bcc76612a88e447c1a78fef9 WatchSource:0}: Error finding container 7eda3ce01e8c2937250dbb787105cd4df25d2e87bcc76612a88e447c1a78fef9: Status 404 returned error can't find the container with id 7eda3ce01e8c2937250dbb787105cd4df25d2e87bcc76612a88e447c1a78fef9 Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.210234 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.431845 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" event={"ID":"5acd47b5-1a65-41c3-af06-401bd9880c1f","Type":"ContainerStarted","Data":"7eda3ce01e8c2937250dbb787105cd4df25d2e87bcc76612a88e447c1a78fef9"} Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.446831 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerStarted","Data":"e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f"} Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.446899 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.468749 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-575c784d98-scqmc" podStartSLOduration=4.468714368 podStartE2EDuration="4.468714368s" podCreationTimestamp="2026-01-21 16:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:31.461882955 +0000 UTC m=+6213.538715984" watchObservedRunningTime="2026-01-21 16:17:31.468714368 +0000 UTC m=+6213.545547387" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.478118 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerStarted","Data":"0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea"} Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.478339 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.479015 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575dc5884b-mwxz4" event={"ID":"9bfec31e-5cec-4820-9f26-34413330e44c","Type":"ContainerStarted","Data":"6621e58a027de94e58b1552acf55b8206233929ba4aaebc3504289a32afdb2f1"} Jan 21 16:17:31 crc 
kubenswrapper[4902]: I0121 16:17:31.486388 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cf8444c78-xmqt2" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.487331 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cf8444c78-xmqt2" event={"ID":"a67ffd84-72d3-4d63-b99a-0fe8ebe12753","Type":"ContainerDied","Data":"c268dfd4954d075a28333859e3f54fce320b71b326194e4f9bbadd0bffc420fd"} Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.487383 4902 scope.go:117] "RemoveContainer" containerID="94a0a3d3b10817a020a3d5800679a614ff1fe2e0c400b8287a07dd2afb859acd" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.500190 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" podStartSLOduration=4.500167713 podStartE2EDuration="4.500167713s" podCreationTimestamp="2026-01-21 16:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:31.498529757 +0000 UTC m=+6213.575362786" watchObservedRunningTime="2026-01-21 16:17:31.500167713 +0000 UTC m=+6213.577000742" Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.555928 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:31 crc kubenswrapper[4902]: I0121 16:17:31.565745 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-cf8444c78-xmqt2"] Jan 21 16:17:31 crc kubenswrapper[4902]: E0121 16:17:31.730445 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ae4f0a_2614_4689_83ae_4cef7ae1df9d.slice/crio-e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda67ffd84_72d3_4d63_b99a_0fe8ebe12753.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.305511 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" path="/var/lib/kubelet/pods/a67ffd84-72d3-4d63-b99a-0fe8ebe12753/volumes" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.503452 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" event={"ID":"5acd47b5-1a65-41c3-af06-401bd9880c1f","Type":"ContainerStarted","Data":"461993387b5608976e6e2282fd1b390053faa19bbea7f9d5be7628219eacf786"} Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.503643 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.507186 4902 generic.go:334] "Generic (PLEG): container finished" podID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" exitCode=1 Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.507249 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerDied","Data":"e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f"} Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.507280 4902 scope.go:117] "RemoveContainer" 
containerID="15c4f131c64554319e2cd62000d98288a117ca09d8376d69bd4cecdd1964137c" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.507712 4902 scope.go:117] "RemoveContainer" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" Jan 21 16:17:32 crc kubenswrapper[4902]: E0121 16:17:32.507953 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-575c784d98-scqmc_openstack(c6ae4f0a-2614-4689-83ae-4cef7ae1df9d)\"" pod="openstack/heat-api-575c784d98-scqmc" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.522421 4902 generic.go:334] "Generic (PLEG): container finished" podID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" exitCode=1 Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.522505 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerDied","Data":"0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea"} Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.523344 4902 scope.go:117] "RemoveContainer" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" Jan 21 16:17:32 crc kubenswrapper[4902]: E0121 16:17:32.523676 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-74df5fd5cf-g8qhb_openstack(f471277e-f0f2-4a10-8234-ed5c3256c82a)\"" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.527509 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" podStartSLOduration=3.527490882 podStartE2EDuration="3.527490882s" podCreationTimestamp="2026-01-21 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:32.524663413 +0000 UTC m=+6214.601496472" watchObservedRunningTime="2026-01-21 16:17:32.527490882 +0000 UTC m=+6214.604323911" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.531216 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575dc5884b-mwxz4" event={"ID":"9bfec31e-5cec-4820-9f26-34413330e44c","Type":"ContainerStarted","Data":"96c0755c729932ca2fbeba1d6f3b1d8d55971c0fa7b2ac7233ae108743ce654b"} Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.531431 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.595916 4902 scope.go:117] "RemoveContainer" containerID="f3c2f8d67445ae1d14e1a237160100d289da6488adf80971ccb697a8822a9fb5" Jan 21 16:17:32 crc kubenswrapper[4902]: I0121 16:17:32.596510 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-575dc5884b-mwxz4" podStartSLOduration=3.5962742580000002 podStartE2EDuration="3.596274258s" podCreationTimestamp="2026-01-21 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:32.587069299 +0000 UTC 
m=+6214.663902328" watchObservedRunningTime="2026-01-21 16:17:32.596274258 +0000 UTC m=+6214.673107277" Jan 21 16:17:33 crc kubenswrapper[4902]: I0121 16:17:33.541801 4902 scope.go:117] "RemoveContainer" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" Jan 21 16:17:33 crc kubenswrapper[4902]: E0121 16:17:33.542512 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-74df5fd5cf-g8qhb_openstack(f471277e-f0f2-4a10-8234-ed5c3256c82a)\"" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" Jan 21 16:17:33 crc kubenswrapper[4902]: I0121 16:17:33.543636 4902 scope.go:117] "RemoveContainer" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" Jan 21 16:17:33 crc kubenswrapper[4902]: E0121 16:17:33.543898 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-575c784d98-scqmc_openstack(c6ae4f0a-2614-4689-83ae-4cef7ae1df9d)\"" pod="openstack/heat-api-575c784d98-scqmc" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" Jan 21 16:17:33 crc kubenswrapper[4902]: I0121 16:17:33.629305 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:33 crc kubenswrapper[4902]: I0121 16:17:33.661789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:34 crc kubenswrapper[4902]: I0121 16:17:34.551769 4902 scope.go:117] "RemoveContainer" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" Jan 21 16:17:34 crc kubenswrapper[4902]: I0121 16:17:34.551865 4902 scope.go:117] "RemoveContainer" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" Jan 21 16:17:34 crc kubenswrapper[4902]: E0121 16:17:34.552166 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-575c784d98-scqmc_openstack(c6ae4f0a-2614-4689-83ae-4cef7ae1df9d)\"" pod="openstack/heat-api-575c784d98-scqmc" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" Jan 21 16:17:34 crc kubenswrapper[4902]: E0121 16:17:34.552208 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-74df5fd5cf-g8qhb_openstack(f471277e-f0f2-4a10-8234-ed5c3256c82a)\"" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.192605 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.123:8000/healthcheck\": read tcp 10.217.0.2:38624->10.217.1.123:8000: read: connection reset by peer" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.587140 4902 generic.go:334] "Generic (PLEG): container finished" podID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerID="3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f" exitCode=0 Jan 21 16:17:35 crc 
kubenswrapper[4902]: I0121 16:17:35.587211 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" event={"ID":"b55674f9-c7ae-4344-979f-d80fc2d0e03b","Type":"ContainerDied","Data":"3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f"} Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.690201 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.810135 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") pod \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.810233 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") pod \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.810299 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") pod \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.810418 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") pod \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\" (UID: \"b55674f9-c7ae-4344-979f-d80fc2d0e03b\") " Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.816795 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4" (OuterVolumeSpecName: "kube-api-access-lmrh4") pod "b55674f9-c7ae-4344-979f-d80fc2d0e03b" (UID: "b55674f9-c7ae-4344-979f-d80fc2d0e03b"). InnerVolumeSpecName "kube-api-access-lmrh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.824466 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b55674f9-c7ae-4344-979f-d80fc2d0e03b" (UID: "b55674f9-c7ae-4344-979f-d80fc2d0e03b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.843079 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b55674f9-c7ae-4344-979f-d80fc2d0e03b" (UID: "b55674f9-c7ae-4344-979f-d80fc2d0e03b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.880900 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data" (OuterVolumeSpecName: "config-data") pod "b55674f9-c7ae-4344-979f-d80fc2d0e03b" (UID: "b55674f9-c7ae-4344-979f-d80fc2d0e03b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.918290 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmrh4\" (UniqueName: \"kubernetes.io/projected/b55674f9-c7ae-4344-979f-d80fc2d0e03b-kube-api-access-lmrh4\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.918390 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.918406 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:35 crc kubenswrapper[4902]: I0121 16:17:35.918418 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55674f9-c7ae-4344-979f-d80fc2d0e03b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.601215 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.601269 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65bd9b7448-nvhqd" event={"ID":"b55674f9-c7ae-4344-979f-d80fc2d0e03b","Type":"ContainerDied","Data":"8ddcaecce27ce57414d483ff30439ab5d5ef54218cadf2c9d61387f3947e594b"} Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.601352 4902 scope.go:117] "RemoveContainer" containerID="3a0cbf96360641a1be4d26f0628bce956b77ed886ce1151ffc5559087263908f" Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.630111 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.639114 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-65bd9b7448-nvhqd"] Jan 21 16:17:36 crc kubenswrapper[4902]: I0121 16:17:36.949553 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.111:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8443: connect: connection refused" Jan 21 16:17:38 crc kubenswrapper[4902]: I0121 16:17:38.306842 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" path="/var/lib/kubelet/pods/b55674f9-c7ae-4344-979f-d80fc2d0e03b/volumes" Jan 21 16:17:41 crc kubenswrapper[4902]: I0121 16:17:41.582017 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-575dc5884b-mwxz4" Jan 21 16:17:41 crc kubenswrapper[4902]: I0121 16:17:41.665636 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:41 crc kubenswrapper[4902]: I0121 16:17:41.796147 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5c8d887b44-lnw77" Jan 21 16:17:41 crc kubenswrapper[4902]: I0121 16:17:41.867859 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.178890 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.270130 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") pod \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.270184 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") pod \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.270322 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") pod \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.270567 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") pod \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\" (UID: \"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.276915 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l" (OuterVolumeSpecName: "kube-api-access-pw66l") pod "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" (UID: "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d"). InnerVolumeSpecName "kube-api-access-pw66l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.277105 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" (UID: "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.303833 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" (UID: "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.344595 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data" (OuterVolumeSpecName: "config-data") pod "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" (UID: "c6ae4f0a-2614-4689-83ae-4cef7ae1df9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.373494 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw66l\" (UniqueName: \"kubernetes.io/projected/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-kube-api-access-pw66l\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.373525 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.373535 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.373544 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.391242 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.475493 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") pod \"f471277e-f0f2-4a10-8234-ed5c3256c82a\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.475652 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") pod \"f471277e-f0f2-4a10-8234-ed5c3256c82a\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.475867 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") pod \"f471277e-f0f2-4a10-8234-ed5c3256c82a\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.475906 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") pod \"f471277e-f0f2-4a10-8234-ed5c3256c82a\" (UID: \"f471277e-f0f2-4a10-8234-ed5c3256c82a\") " Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.484945 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7" (OuterVolumeSpecName: "kube-api-access-m89x7") pod "f471277e-f0f2-4a10-8234-ed5c3256c82a" (UID: "f471277e-f0f2-4a10-8234-ed5c3256c82a"). 
InnerVolumeSpecName "kube-api-access-m89x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.485441 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f471277e-f0f2-4a10-8234-ed5c3256c82a" (UID: "f471277e-f0f2-4a10-8234-ed5c3256c82a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.508745 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f471277e-f0f2-4a10-8234-ed5c3256c82a" (UID: "f471277e-f0f2-4a10-8234-ed5c3256c82a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.529138 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data" (OuterVolumeSpecName: "config-data") pod "f471277e-f0f2-4a10-8234-ed5c3256c82a" (UID: "f471277e-f0f2-4a10-8234-ed5c3256c82a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.578841 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.578872 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.578886 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m89x7\" (UniqueName: \"kubernetes.io/projected/f471277e-f0f2-4a10-8234-ed5c3256c82a-kube-api-access-m89x7\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.578896 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f471277e-f0f2-4a10-8234-ed5c3256c82a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.729030 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-575c784d98-scqmc" event={"ID":"c6ae4f0a-2614-4689-83ae-4cef7ae1df9d","Type":"ContainerDied","Data":"3cccd66a946a157a90314b5d37ad28c489b5d8240c4b1c57e8f9ef0b761138da"} Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.729401 4902 scope.go:117] "RemoveContainer" containerID="e99be572b5f0de08db1d4011d3d0be2733c9e9eaf11a42e426839109762bfc6f" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.729081 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-575c784d98-scqmc" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.731334 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" event={"ID":"f471277e-f0f2-4a10-8234-ed5c3256c82a","Type":"ContainerDied","Data":"9bd457e7b58a6be152461f46b27b4879936f13cc46a82ddd9cea36267b98f5de"} Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.731362 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74df5fd5cf-g8qhb" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.754814 4902 scope.go:117] "RemoveContainer" containerID="0a3cd4119f6b93ca8c960d0e547537c8f6b4c8002580608a93d62f7ad2909dea" Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.777634 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.787528 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-575c784d98-scqmc"] Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.798758 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:42 crc kubenswrapper[4902]: I0121 16:17:42.807315 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-74df5fd5cf-g8qhb"] Jan 21 16:17:43 crc kubenswrapper[4902]: I0121 16:17:43.294772 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:17:43 crc kubenswrapper[4902]: E0121 16:17:43.295103 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:17:44 crc kubenswrapper[4902]: I0121 16:17:44.306929 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" path="/var/lib/kubelet/pods/c6ae4f0a-2614-4689-83ae-4cef7ae1df9d/volumes" Jan 21 16:17:44 crc kubenswrapper[4902]: I0121 16:17:44.308779 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" path="/var/lib/kubelet/pods/f471277e-f0f2-4a10-8234-ed5c3256c82a/volumes" Jan 21 16:17:46 crc kubenswrapper[4902]: I0121 16:17:46.950095 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-786f96566b-w596t" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.111:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.111:8443: connect: connection refused" Jan 21 16:17:46 crc kubenswrapper[4902]: I0121 16:17:46.950504 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:17:48 crc kubenswrapper[4902]: I0121 16:17:48.525569 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-68647965fb-5bvjr" Jan 21 16:17:48 crc kubenswrapper[4902]: I0121 16:17:48.573217 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:48 crc kubenswrapper[4902]: I0121 16:17:48.573437 4902 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-77695bdf6-844ml" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" containerID="cri-o://c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" gracePeriod=60 Jan 21 16:17:50 crc kubenswrapper[4902]: E0121 16:17:50.999555 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 16:17:51 crc kubenswrapper[4902]: E0121 16:17:51.001733 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 16:17:51 crc kubenswrapper[4902]: E0121 16:17:51.003368 4902 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 16:17:51 crc kubenswrapper[4902]: E0121 16:17:51.003406 4902 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-77695bdf6-844ml" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.864557 4902 generic.go:334] "Generic (PLEG): container finished" podID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerID="dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd" exitCode=137 Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.866308 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerDied","Data":"dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd"} Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.866355 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786f96566b-w596t" event={"ID":"b772cd9d-83ce-4675-84de-09f40bdcabe3","Type":"ContainerDied","Data":"3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd"} Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.866369 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b11c2a3c705c7354a42e8da86cbb16b0b2109f8538bbf991bf39e340b5a23cd" Jan 21 16:17:53 crc kubenswrapper[4902]: I0121 16:17:53.955026 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032164 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032333 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032456 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032503 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbc7c\" (UniqueName: \"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032564 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032867 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032890 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs" (OuterVolumeSpecName: "logs") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.032917 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") pod \"b772cd9d-83ce-4675-84de-09f40bdcabe3\" (UID: \"b772cd9d-83ce-4675-84de-09f40bdcabe3\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.033744 4902 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b772cd9d-83ce-4675-84de-09f40bdcabe3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.040362 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.040524 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c" (OuterVolumeSpecName: "kube-api-access-hbc7c") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "kube-api-access-hbc7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.064733 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts" (OuterVolumeSpecName: "scripts") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.075704 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data" (OuterVolumeSpecName: "config-data") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.080437 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.107119 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b772cd9d-83ce-4675-84de-09f40bdcabe3" (UID: "b772cd9d-83ce-4675-84de-09f40bdcabe3"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135344 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135372 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135381 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135389 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbc7c\" (UniqueName: \"kubernetes.io/projected/b772cd9d-83ce-4675-84de-09f40bdcabe3-kube-api-access-hbc7c\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135401 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b772cd9d-83ce-4675-84de-09f40bdcabe3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.135410 4902 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b772cd9d-83ce-4675-84de-09f40bdcabe3-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.300494 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:17:54 crc kubenswrapper[4902]: E0121 16:17:54.300728 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.877252 4902 generic.go:334] "Generic (PLEG): container finished" podID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" exitCode=0 Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.877415 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77695bdf6-844ml" event={"ID":"26cf64d9-8389-473d-a51f-2ca282b5787f","Type":"ContainerDied","Data":"c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48"} Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.878385 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77695bdf6-844ml" event={"ID":"26cf64d9-8389-473d-a51f-2ca282b5787f","Type":"ContainerDied","Data":"cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133"} Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.878437 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb43a5fb250f0e18d67e65759c41da1b4870b16bb8305d85b3eb43efb5279133" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.878572 4902 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/horizon-786f96566b-w596t" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.915541 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.935378 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.951128 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-786f96566b-w596t"] Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.951772 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") pod \"26cf64d9-8389-473d-a51f-2ca282b5787f\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.951831 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") pod \"26cf64d9-8389-473d-a51f-2ca282b5787f\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.951971 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") pod \"26cf64d9-8389-473d-a51f-2ca282b5787f\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.952133 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") pod \"26cf64d9-8389-473d-a51f-2ca282b5787f\" (UID: \"26cf64d9-8389-473d-a51f-2ca282b5787f\") " Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.956375 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "26cf64d9-8389-473d-a51f-2ca282b5787f" (UID: "26cf64d9-8389-473d-a51f-2ca282b5787f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.956990 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw" (OuterVolumeSpecName: "kube-api-access-tv2sw") pod "26cf64d9-8389-473d-a51f-2ca282b5787f" (UID: "26cf64d9-8389-473d-a51f-2ca282b5787f"). InnerVolumeSpecName "kube-api-access-tv2sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:54 crc kubenswrapper[4902]: I0121 16:17:54.986952 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26cf64d9-8389-473d-a51f-2ca282b5787f" (UID: "26cf64d9-8389-473d-a51f-2ca282b5787f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.001898 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data" (OuterVolumeSpecName: "config-data") pod "26cf64d9-8389-473d-a51f-2ca282b5787f" (UID: "26cf64d9-8389-473d-a51f-2ca282b5787f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.054348 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.054380 4902 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.054390 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv2sw\" (UniqueName: \"kubernetes.io/projected/26cf64d9-8389-473d-a51f-2ca282b5787f-kube-api-access-tv2sw\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.054400 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cf64d9-8389-473d-a51f-2ca282b5787f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.886710 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77695bdf6-844ml" Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.917494 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:55 crc kubenswrapper[4902]: I0121 16:17:55.927495 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-77695bdf6-844ml"] Jan 21 16:17:56 crc kubenswrapper[4902]: I0121 16:17:56.309328 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" path="/var/lib/kubelet/pods/26cf64d9-8389-473d-a51f-2ca282b5787f/volumes" Jan 21 16:17:56 crc kubenswrapper[4902]: I0121 16:17:56.310389 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" path="/var/lib/kubelet/pods/b772cd9d-83ce-4675-84de-09f40bdcabe3/volumes" Jan 21 16:18:05 crc kubenswrapper[4902]: I0121 16:18:05.048896 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:18:05 crc kubenswrapper[4902]: I0121 16:18:05.056900 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5eaa-account-create-update-6b2pj"] Jan 21 16:18:05 crc kubenswrapper[4902]: I0121 16:18:05.066682 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:18:05 crc kubenswrapper[4902]: I0121 16:18:05.074699 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-nh5zs"] Jan 21 16:18:06 crc kubenswrapper[4902]: I0121 16:18:06.311995 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316e80e8-1286-4be7-b686-90693f8e7c95" path="/var/lib/kubelet/pods/316e80e8-1286-4be7-b686-90693f8e7c95/volumes" Jan 21 16:18:06 crc kubenswrapper[4902]: I0121 
16:18:06.312773 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d97084-2d8b-44c2-877e-b09211b7d84d" path="/var/lib/kubelet/pods/d8d97084-2d8b-44c2-877e-b09211b7d84d/volumes" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.305518 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.306264 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.633921 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn"] Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634673 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634696 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634711 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634723 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634737 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634745 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634756 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634765 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634779 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634788 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634815 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634824 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634838 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon-log" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634846 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon-log" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634865 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634874 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: E0121 16:18:08.634889 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.634897 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635134 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635148 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cf64d9-8389-473d-a51f-2ca282b5787f" containerName="heat-engine" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635159 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635166 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55674f9-c7ae-4344-979f-d80fc2d0e03b" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635177 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b772cd9d-83ce-4675-84de-09f40bdcabe3" containerName="horizon-log" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635184 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f471277e-f0f2-4a10-8234-ed5c3256c82a" containerName="heat-cfnapi" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635191 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635204 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67ffd84-72d3-4d63-b99a-0fe8ebe12753" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.635212 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ae4f0a-2614-4689-83ae-4cef7ae1df9d" containerName="heat-api" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.636607 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.638947 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.645605 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn"] Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.714523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.714604 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.715041 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.816794 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.816880 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.816927 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.817577 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.817678 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.849919 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:08 crc kubenswrapper[4902]: I0121 16:18:08.965674 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:09 crc kubenswrapper[4902]: I0121 16:18:09.433218 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn"] Jan 21 16:18:10 crc kubenswrapper[4902]: I0121 16:18:10.025720 4902 generic.go:334] "Generic (PLEG): container finished" podID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerID="0b00fe8c5f033dc6351be1dde481bb8d45c0867e594bdad310b21aa2f63223d4" exitCode=0 Jan 21 16:18:10 crc kubenswrapper[4902]: I0121 16:18:10.025930 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerDied","Data":"0b00fe8c5f033dc6351be1dde481bb8d45c0867e594bdad310b21aa2f63223d4"} Jan 21 16:18:10 crc kubenswrapper[4902]: I0121 16:18:10.026065 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerStarted","Data":"78d219e3bb4b4d28dfde0cb06bec377b18c0b547c9f904dee848391bb2a7d6f8"} Jan 21 16:18:13 crc kubenswrapper[4902]: I0121 16:18:13.052981 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-k7rr4"] Jan 21 16:18:13 crc kubenswrapper[4902]: I0121 16:18:13.061663 4902 generic.go:334] "Generic (PLEG): container finished" podID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerID="862bb0aa13c9e2f00e09139d82c13884734dfb5915760790c082c3ccda69f0a6" exitCode=0 Jan 21 16:18:13 crc kubenswrapper[4902]: I0121 16:18:13.061725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerDied","Data":"862bb0aa13c9e2f00e09139d82c13884734dfb5915760790c082c3ccda69f0a6"} Jan 21 16:18:13 crc kubenswrapper[4902]: I0121 16:18:13.064576 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-k7rr4"] 
Jan 21 16:18:14 crc kubenswrapper[4902]: I0121 16:18:14.072805 4902 generic.go:334] "Generic (PLEG): container finished" podID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerID="a5e85813eaf0a006813b7355e03235929731ddf57140673e0f3ee7fa69ff26ae" exitCode=0 Jan 21 16:18:14 crc kubenswrapper[4902]: I0121 16:18:14.072885 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerDied","Data":"a5e85813eaf0a006813b7355e03235929731ddf57140673e0f3ee7fa69ff26ae"} Jan 21 16:18:14 crc kubenswrapper[4902]: I0121 16:18:14.306216 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610eddf1-f5de-40bb-8946-2092c4edfa9c" path="/var/lib/kubelet/pods/610eddf1-f5de-40bb-8946-2092c4edfa9c/volumes" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.447067 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.593059 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") pod \"052d7e2b-1135-41ae-8c3e-a750c22fce27\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.593675 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") pod \"052d7e2b-1135-41ae-8c3e-a750c22fce27\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.593745 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") pod \"052d7e2b-1135-41ae-8c3e-a750c22fce27\" (UID: \"052d7e2b-1135-41ae-8c3e-a750c22fce27\") " Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.597726 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle" (OuterVolumeSpecName: "bundle") pod "052d7e2b-1135-41ae-8c3e-a750c22fce27" (UID: "052d7e2b-1135-41ae-8c3e-a750c22fce27"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.603339 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b" (OuterVolumeSpecName: "kube-api-access-7n55b") pod "052d7e2b-1135-41ae-8c3e-a750c22fce27" (UID: "052d7e2b-1135-41ae-8c3e-a750c22fce27"). InnerVolumeSpecName "kube-api-access-7n55b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.608636 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util" (OuterVolumeSpecName: "util") pod "052d7e2b-1135-41ae-8c3e-a750c22fce27" (UID: "052d7e2b-1135-41ae-8c3e-a750c22fce27"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.696788 4902 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-util\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.696840 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n55b\" (UniqueName: \"kubernetes.io/projected/052d7e2b-1135-41ae-8c3e-a750c22fce27-kube-api-access-7n55b\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:15 crc kubenswrapper[4902]: I0121 16:18:15.696858 4902 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/052d7e2b-1135-41ae-8c3e-a750c22fce27-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:16 crc kubenswrapper[4902]: I0121 16:18:16.101298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" event={"ID":"052d7e2b-1135-41ae-8c3e-a750c22fce27","Type":"ContainerDied","Data":"78d219e3bb4b4d28dfde0cb06bec377b18c0b547c9f904dee848391bb2a7d6f8"} Jan 21 16:18:16 crc kubenswrapper[4902]: I0121 16:18:16.101585 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78d219e3bb4b4d28dfde0cb06bec377b18c0b547c9f904dee848391bb2a7d6f8" Jan 21 16:18:16 crc kubenswrapper[4902]: I0121 16:18:16.101365 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn" Jan 21 16:18:22 crc kubenswrapper[4902]: I0121 16:18:22.295613 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:18:22 crc kubenswrapper[4902]: E0121 16:18:22.296305 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.338512 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"] Jan 21 16:18:26 crc kubenswrapper[4902]: E0121 16:18:26.339646 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="extract" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.339665 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="extract" Jan 21 16:18:26 crc kubenswrapper[4902]: E0121 16:18:26.339684 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="pull" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.339692 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="pull" Jan 21 16:18:26 crc kubenswrapper[4902]: E0121 16:18:26.339709 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="util" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.339719 4902 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="util" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.339942 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="052d7e2b-1135-41ae-8c3e-a750c22fce27" containerName="extract" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.340919 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.343893 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-wrvml" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.344183 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.344330 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.360690 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"] Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.472737 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"] Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.474158 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.477070 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-zb9cz" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.477510 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.500095 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"] Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.501672 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.526027 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"] Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.538282 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q654\" (UniqueName: \"kubernetes.io/projected/5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5-kube-api-access-2q654\") pod \"obo-prometheus-operator-68bc856cb9-tw4cr\" (UID: \"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.562504 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"] Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.639975 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q654\" (UniqueName: \"kubernetes.io/projected/5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5-kube-api-access-2q654\") pod \"obo-prometheus-operator-68bc856cb9-tw4cr\" (UID: \"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.640052 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.640152 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.640283 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.640366 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.677622 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6xc5d"] Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.678487 
4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q654\" (UniqueName: \"kubernetes.io/projected/5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5-kube-api-access-2q654\") pod \"obo-prometheus-operator-68bc856cb9-tw4cr\" (UID: \"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.679349 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.686528 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.686760 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rm684" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.690742 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6xc5d"] Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.741967 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.742098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.742154 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.742225 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.746366 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.746700 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/dce978e0-318d-4086-8594-08da83f1fe23-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-l469x\" (UID: \"dce978e0-318d-4086-8594-08da83f1fe23\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.749215 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.752035 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c014cd52-9da2-4fa7-96b6-0a400835f56e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cb6669c59-csqks\" (UID: \"c014cd52-9da2-4fa7-96b6-0a400835f56e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.789128 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k6f6k"] Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.790739 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.794473 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-llxvm" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.798746 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.808081 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k6f6k"] Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.855031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.855196 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfs8l\" (UniqueName: \"kubernetes.io/projected/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-kube-api-access-wfs8l\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.862826 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.941276 4902 scope.go:117] "RemoveContainer" containerID="73644d909cc281b656bcc92b7fe668f43e2f43e5a2df8a9a26185cf7ab096d45" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.958520 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.965997 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwcll\" (UniqueName: \"kubernetes.io/projected/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-kube-api-access-dwcll\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.966617 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.966902 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.966973 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfs8l\" (UniqueName: \"kubernetes.io/projected/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-kube-api-access-wfs8l\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:26 crc kubenswrapper[4902]: I0121 16:18:26.985352 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.012601 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfs8l\" (UniqueName: \"kubernetes.io/projected/cdfe14cf-a2d6-4df7-92b5-c4146bdab44d-kube-api-access-wfs8l\") pod \"observability-operator-59bdc8b94-6xc5d\" (UID: \"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d\") " pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.024338 4902 scope.go:117] "RemoveContainer" containerID="f9ff394d565c17472cbe0972635a74048f6673c7d9a12c90517226508f39624b" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.069330 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwcll\" (UniqueName: \"kubernetes.io/projected/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-kube-api-access-dwcll\") 
pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.069654 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.070532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.086319 4902 scope.go:117] "RemoveContainer" containerID="08d576dd917c4a5813c6d9db476bd6fcba6691cafc01f2c3b9a02a013671f644" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.090631 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwcll\" (UniqueName: \"kubernetes.io/projected/ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a-kube-api-access-dwcll\") pod \"perses-operator-5bf474d74f-k6f6k\" (UID: \"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a\") " pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.162283 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.313224 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.380523 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x"] Jan 21 16:18:27 crc kubenswrapper[4902]: W0121 16:18:27.413125 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddce978e0_318d_4086_8594_08da83f1fe23.slice/crio-06baaa8ce7f1ee2729da5b7463082d9063adfdccc1ce078700fcef828c7c54ae WatchSource:0}: Error finding container 06baaa8ce7f1ee2729da5b7463082d9063adfdccc1ce078700fcef828c7c54ae: Status 404 returned error can't find the container with id 06baaa8ce7f1ee2729da5b7463082d9063adfdccc1ce078700fcef828c7c54ae Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.481832 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks"] Jan 21 16:18:27 crc kubenswrapper[4902]: W0121 16:18:27.624157 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bef9b7b_7b8b_4a3b_82ca_cc12bfa8d7a5.slice/crio-c9e3ec7b76a75727f0b440c1c28989960e5fbedaeea7de46806fdff61bd2465c WatchSource:0}: Error finding container c9e3ec7b76a75727f0b440c1c28989960e5fbedaeea7de46806fdff61bd2465c: Status 404 returned error can't find the container with id c9e3ec7b76a75727f0b440c1c28989960e5fbedaeea7de46806fdff61bd2465c Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.634237 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr"] Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.822196 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6xc5d"] Jan 21 16:18:27 crc kubenswrapper[4902]: I0121 16:18:27.930533 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k6f6k"] Jan 21 16:18:27 crc kubenswrapper[4902]: W0121 16:18:27.938004 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea8d550d_3cd6_4d90_9209_f11bbf7d4e3a.slice/crio-813c998872b17b9d7ee3b9b5723b300c843217cc1529bcd7e89717662e75c39b WatchSource:0}: Error finding container 813c998872b17b9d7ee3b9b5723b300c843217cc1529bcd7e89717662e75c39b: Status 404 returned error can't find the container with id 813c998872b17b9d7ee3b9b5723b300c843217cc1529bcd7e89717662e75c39b Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.256143 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" event={"ID":"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d","Type":"ContainerStarted","Data":"089b6b52baec51ea81aa390dfae2c9f909a4e8e2b79dd484df55340e6e1f58fa"} Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.259954 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" event={"ID":"c014cd52-9da2-4fa7-96b6-0a400835f56e","Type":"ContainerStarted","Data":"1c0681d4cbfe8ce60ee3345a1f6d3377c0033bcd55ab3ea739533131e9008e02"} Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.263350 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" 
event={"ID":"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5","Type":"ContainerStarted","Data":"c9e3ec7b76a75727f0b440c1c28989960e5fbedaeea7de46806fdff61bd2465c"} Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.265326 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" event={"ID":"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a","Type":"ContainerStarted","Data":"813c998872b17b9d7ee3b9b5723b300c843217cc1529bcd7e89717662e75c39b"} Jan 21 16:18:28 crc kubenswrapper[4902]: I0121 16:18:28.269164 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" event={"ID":"dce978e0-318d-4086-8594-08da83f1fe23","Type":"ContainerStarted","Data":"06baaa8ce7f1ee2729da5b7463082d9063adfdccc1ce078700fcef828c7c54ae"} Jan 21 16:18:35 crc kubenswrapper[4902]: I0121 16:18:35.296059 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:18:35 crc kubenswrapper[4902]: E0121 16:18:35.296967 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.415832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" event={"ID":"ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a","Type":"ContainerStarted","Data":"1dd6288d3dce16efddd74e9158b867d9611e49f940f2c1411740d9d7dd589b64"} Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.417325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.419701 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" event={"ID":"cdfe14cf-a2d6-4df7-92b5-c4146bdab44d","Type":"ContainerStarted","Data":"57d93f2e374b6d8ce1ce82e849b147adf267dbddd1e75268a89cc64514ade57b"} Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.420751 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.424579 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.428939 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" event={"ID":"c014cd52-9da2-4fa7-96b6-0a400835f56e","Type":"ContainerStarted","Data":"1d8e9ee848c15b0397cea97beacec61eb32e7f5507bfdeb91b4ff5760f150317"} Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.434526 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" event={"ID":"dce978e0-318d-4086-8594-08da83f1fe23","Type":"ContainerStarted","Data":"b7db0be9f863c77dbaecee87b14573b8c2484dc9ba565f7f8666329296adf971"} Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.444071 4902 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" event={"ID":"5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5","Type":"ContainerStarted","Data":"ccad71d335f9ae9bfad642b05db21d00feb1eae6305188b52d64aa7aaa2a201c"} Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.446718 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" podStartSLOduration=2.962584365 podStartE2EDuration="11.446699682s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.940794722 +0000 UTC m=+6270.017627751" lastFinishedPulling="2026-01-21 16:18:36.424910039 +0000 UTC m=+6278.501743068" observedRunningTime="2026-01-21 16:18:37.4402155 +0000 UTC m=+6279.517048519" watchObservedRunningTime="2026-01-21 16:18:37.446699682 +0000 UTC m=+6279.523532711" Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.464731 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-6xc5d" podStartSLOduration=2.802443619 podStartE2EDuration="11.464714159s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.831606409 +0000 UTC m=+6269.908439438" lastFinishedPulling="2026-01-21 16:18:36.493876949 +0000 UTC m=+6278.570709978" observedRunningTime="2026-01-21 16:18:37.460192592 +0000 UTC m=+6279.537025621" watchObservedRunningTime="2026-01-21 16:18:37.464714159 +0000 UTC m=+6279.541547188" Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.494905 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-l469x" podStartSLOduration=2.578718793 podStartE2EDuration="11.494885788s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.415691685 +0000 UTC m=+6269.492524714" lastFinishedPulling="2026-01-21 16:18:36.33185868 +0000 UTC m=+6278.408691709" observedRunningTime="2026-01-21 16:18:37.488517409 +0000 UTC m=+6279.565350438" watchObservedRunningTime="2026-01-21 16:18:37.494885788 +0000 UTC m=+6279.571718817" Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.523328 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cb6669c59-csqks" podStartSLOduration=2.711758907 podStartE2EDuration="11.523304018s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.52033542 +0000 UTC m=+6269.597168459" lastFinishedPulling="2026-01-21 16:18:36.331880541 +0000 UTC m=+6278.408713570" observedRunningTime="2026-01-21 16:18:37.516533998 +0000 UTC m=+6279.593367027" watchObservedRunningTime="2026-01-21 16:18:37.523304018 +0000 UTC m=+6279.600137047" Jan 21 16:18:37 crc kubenswrapper[4902]: I0121 16:18:37.558518 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-tw4cr" podStartSLOduration=2.860319139 podStartE2EDuration="11.558500429s" podCreationTimestamp="2026-01-21 16:18:26 +0000 UTC" firstStartedPulling="2026-01-21 16:18:27.638132215 +0000 UTC m=+6269.714965244" lastFinishedPulling="2026-01-21 16:18:36.336313505 +0000 UTC m=+6278.413146534" observedRunningTime="2026-01-21 16:18:37.554486056 +0000 UTC m=+6279.631319085" watchObservedRunningTime="2026-01-21 16:18:37.558500429 +0000 UTC m=+6279.635333458" Jan 21 16:18:47 
crc kubenswrapper[4902]: I0121 16:18:47.295900 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:18:47 crc kubenswrapper[4902]: E0121 16:18:47.296687 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:18:47 crc kubenswrapper[4902]: I0121 16:18:47.315600 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-k6f6k" Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.773422 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.774033 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerName="openstackclient" containerID="cri-o://0820f291e3e79ca9f589a0a9fd094ceca1ca151624389e86bb426b3920d38db1" gracePeriod=2 Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.782384 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.823723 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 16:18:49 crc kubenswrapper[4902]: E0121 16:18:49.824190 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerName="openstackclient" Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.824208 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerName="openstackclient" Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.824410 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerName="openstackclient" Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.825250 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.837407 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.861294 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="f901a0e2-6941-4d4e-a90a-2905acf87521" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.991421 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.991551 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.991597 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:49 crc kubenswrapper[4902]: I0121 16:18:49.991928 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.015909 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.018803 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.029438 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-89mvr" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.032340 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.095331 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.095436 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.095658 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.095713 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.096484 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.103795 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.118600 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.140697 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") pod \"openstackclient\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.170562 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.204532 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") pod \"kube-state-metrics-0\" (UID: \"11823665-4fce-4950-a6d3-bc34bafbc01d\") " pod="openstack/kube-state-metrics-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.312505 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") pod \"kube-state-metrics-0\" (UID: \"11823665-4fce-4950-a6d3-bc34bafbc01d\") " pod="openstack/kube-state-metrics-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.357285 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") pod \"kube-state-metrics-0\" (UID: \"11823665-4fce-4950-a6d3-bc34bafbc01d\") " pod="openstack/kube-state-metrics-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.632786 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.822082 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.834780 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.840985 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.851125 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-c52zv" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.851709 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.851810 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.855784 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.865368 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.933866 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934003 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-cluster-tls-config\") pod 
\"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934079 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934124 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934168 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnj2c\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-kube-api-access-rnj2c\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934227 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:50 crc kubenswrapper[4902]: I0121 16:18:50.934248 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.036683 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037024 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037088 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037121 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037156 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnj2c\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-kube-api-access-rnj2c\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037199 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.037218 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.043118 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.043218 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.070263 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.070376 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.070558 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.070567 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/657b791a-81e2-483e-8ae9-b261f3bc0c41-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.080684 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnj2c\" (UniqueName: \"kubernetes.io/projected/657b791a-81e2-483e-8ae9-b261f3bc0c41-kube-api-access-rnj2c\") pod \"alertmanager-metric-storage-0\" (UID: \"657b791a-81e2-483e-8ae9-b261f3bc0c41\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.194636 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.260505 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.468905 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.491820 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.494716 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.495305 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.508755 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.508960 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.509137 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.509286 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.509426 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.512002 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nkftv" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.559883 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.559970 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 
21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.560007 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.560077 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.590526 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.598727 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.598880 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.598938 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.598966 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.599019 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.599107 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.697416 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.703528 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.703655 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704077 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704182 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704328 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704425 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704453 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704512 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 
16:18:51.704590 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.704687 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.712669 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.712758 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.713188 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.720301 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.724203 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.730819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.734768 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.734820 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2072cb44e63c79cbe1d1309d1abe2e89a3820c0b6ba86768b97aed1379d46137/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.746959 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.748857 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.749356 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.757266 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.794909 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11823665-4fce-4950-a6d3-bc34bafbc01d","Type":"ContainerStarted","Data":"6cce11915f96493257a7b6fc755ce2c6cf10806ef6428a4421a57569fde4b038"} Jan 21 16:18:51 crc kubenswrapper[4902]: I0121 16:18:51.801910 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7fbbd7fc-3ed5-4747-8723-d1b24677c146","Type":"ContainerStarted","Data":"01cb86272a3a106d5f7caa4d00fe9d1b23462318a93651ce85d1c2e200e45add"} Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.192080 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 21 16:18:52 crc kubenswrapper[4902]: W0121 16:18:52.309649 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod657b791a_81e2_483e_8ae9_b261f3bc0c41.slice/crio-48dae7a089436e6fe913bb9894877d101fb5d1e2ac226b31a442e91d78824490 WatchSource:0}: Error finding container 48dae7a089436e6fe913bb9894877d101fb5d1e2ac226b31a442e91d78824490: Status 404 returned error can't find the container with id 48dae7a089436e6fe913bb9894877d101fb5d1e2ac226b31a442e91d78824490 Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.400380 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod 
\"prometheus-metric-storage-0\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.448113 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.820821 4902 generic.go:334] "Generic (PLEG): container finished" podID="f901a0e2-6941-4d4e-a90a-2905acf87521" containerID="0820f291e3e79ca9f589a0a9fd094ceca1ca151624389e86bb426b3920d38db1" exitCode=137 Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.822547 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerStarted","Data":"48dae7a089436e6fe913bb9894877d101fb5d1e2ac226b31a442e91d78824490"} Jan 21 16:18:52 crc kubenswrapper[4902]: I0121 16:18:52.994855 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:18:52 crc kubenswrapper[4902]: W0121 16:18:52.998058 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bd53316_beed_49bb_8eec_a78efdb19f0a.slice/crio-0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985 WatchSource:0}: Error finding container 0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985: Status 404 returned error can't find the container with id 0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985 Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.381268 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.464765 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") pod \"f901a0e2-6941-4d4e-a90a-2905acf87521\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.464961 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") pod \"f901a0e2-6941-4d4e-a90a-2905acf87521\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.465061 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") pod \"f901a0e2-6941-4d4e-a90a-2905acf87521\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.465241 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") pod \"f901a0e2-6941-4d4e-a90a-2905acf87521\" (UID: \"f901a0e2-6941-4d4e-a90a-2905acf87521\") " Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.471380 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk" (OuterVolumeSpecName: "kube-api-access-89zjk") pod "f901a0e2-6941-4d4e-a90a-2905acf87521" (UID: 
"f901a0e2-6941-4d4e-a90a-2905acf87521"). InnerVolumeSpecName "kube-api-access-89zjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.493942 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f901a0e2-6941-4d4e-a90a-2905acf87521" (UID: "f901a0e2-6941-4d4e-a90a-2905acf87521"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.506155 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f901a0e2-6941-4d4e-a90a-2905acf87521" (UID: "f901a0e2-6941-4d4e-a90a-2905acf87521"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.533565 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f901a0e2-6941-4d4e-a90a-2905acf87521" (UID: "f901a0e2-6941-4d4e-a90a-2905acf87521"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.567460 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.567522 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.567533 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89zjk\" (UniqueName: \"kubernetes.io/projected/f901a0e2-6941-4d4e-a90a-2905acf87521-kube-api-access-89zjk\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.567547 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f901a0e2-6941-4d4e-a90a-2905acf87521-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.832463 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7fbbd7fc-3ed5-4747-8723-d1b24677c146","Type":"ContainerStarted","Data":"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95"} Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.836278 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11823665-4fce-4950-a6d3-bc34bafbc01d","Type":"ContainerStarted","Data":"2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169"} Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.836355 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.838229 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.838279 4902 scope.go:117] "RemoveContainer" containerID="0820f291e3e79ca9f589a0a9fd094ceca1ca151624389e86bb426b3920d38db1" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.839542 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985"} Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.856491 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="f901a0e2-6941-4d4e-a90a-2905acf87521" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.857303 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.857272371 podStartE2EDuration="4.857272371s" podCreationTimestamp="2026-01-21 16:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:18:53.85331369 +0000 UTC m=+6295.930146729" watchObservedRunningTime="2026-01-21 16:18:53.857272371 +0000 UTC m=+6295.934105400" Jan 21 16:18:53 crc kubenswrapper[4902]: I0121 16:18:53.882537 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.337981969 podStartE2EDuration="4.882517262s" podCreationTimestamp="2026-01-21 16:18:49 +0000 UTC" firstStartedPulling="2026-01-21 16:18:51.756966388 +0000 UTC m=+6293.833799417" lastFinishedPulling="2026-01-21 16:18:53.301501681 +0000 UTC m=+6295.378334710" observedRunningTime="2026-01-21 16:18:53.87249357 +0000 UTC m=+6295.949326609" watchObservedRunningTime="2026-01-21 16:18:53.882517262 +0000 UTC m=+6295.959350291" Jan 21 16:18:54 crc kubenswrapper[4902]: I0121 16:18:54.314902 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f901a0e2-6941-4d4e-a90a-2905acf87521" path="/var/lib/kubelet/pods/f901a0e2-6941-4d4e-a90a-2905acf87521/volumes" Jan 21 16:18:59 crc kubenswrapper[4902]: I0121 16:18:59.906208 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerStarted","Data":"ce5537c27ef67a56a41ab21272778326e134e9f64738ecf2ae425325e61ed791"} Jan 21 16:18:59 crc kubenswrapper[4902]: I0121 16:18:59.908091 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"ac1d3d4a6fd4695fa37a965837770a925659e19ff547b7bc40c9d740b5a8f0f9"} Jan 21 16:19:00 crc kubenswrapper[4902]: I0121 16:19:00.295014 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:19:00 crc kubenswrapper[4902]: E0121 16:19:00.295388 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" 
Jan 21 16:19:00 crc kubenswrapper[4902]: I0121 16:19:00.642183 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 16:19:06 crc kubenswrapper[4902]: I0121 16:19:06.976700 4902 generic.go:334] "Generic (PLEG): container finished" podID="657b791a-81e2-483e-8ae9-b261f3bc0c41" containerID="ce5537c27ef67a56a41ab21272778326e134e9f64738ecf2ae425325e61ed791" exitCode=0 Jan 21 16:19:06 crc kubenswrapper[4902]: I0121 16:19:06.976840 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerDied","Data":"ce5537c27ef67a56a41ab21272778326e134e9f64738ecf2ae425325e61ed791"} Jan 21 16:19:06 crc kubenswrapper[4902]: I0121 16:19:06.980522 4902 generic.go:334] "Generic (PLEG): container finished" podID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerID="ac1d3d4a6fd4695fa37a965837770a925659e19ff547b7bc40c9d740b5a8f0f9" exitCode=0 Jan 21 16:19:06 crc kubenswrapper[4902]: I0121 16:19:06.980645 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"ac1d3d4a6fd4695fa37a965837770a925659e19ff547b7bc40c9d740b5a8f0f9"} Jan 21 16:19:10 crc kubenswrapper[4902]: I0121 16:19:10.031270 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerStarted","Data":"77de32e6d676c373333b37260bfafcd1458b14d092faf9c4240b79e643a0cd70"} Jan 21 16:19:14 crc kubenswrapper[4902]: I0121 16:19:14.070304 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"657b791a-81e2-483e-8ae9-b261f3bc0c41","Type":"ContainerStarted","Data":"9e9837593f8094b124af773e99b3c2e25b7283a441c08e6df1ca3384c6ba9061"} Jan 21 16:19:14 crc kubenswrapper[4902]: I0121 16:19:14.071344 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 21 16:19:14 crc kubenswrapper[4902]: I0121 16:19:14.073838 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 21 16:19:14 crc kubenswrapper[4902]: I0121 16:19:14.098122 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.784775432 podStartE2EDuration="24.098101756s" podCreationTimestamp="2026-01-21 16:18:50 +0000 UTC" firstStartedPulling="2026-01-21 16:18:52.319697894 +0000 UTC m=+6294.396530923" lastFinishedPulling="2026-01-21 16:19:09.633024218 +0000 UTC m=+6311.709857247" observedRunningTime="2026-01-21 16:19:14.094238277 +0000 UTC m=+6316.171071306" watchObservedRunningTime="2026-01-21 16:19:14.098101756 +0000 UTC m=+6316.174934775" Jan 21 16:19:15 crc kubenswrapper[4902]: I0121 16:19:15.294896 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:19:15 crc kubenswrapper[4902]: E0121 16:19:15.295428 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:19:17 crc kubenswrapper[4902]: I0121 16:19:17.102737 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d"} Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.045652 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-47gxx"] Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.058246 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"] Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.073991 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-47gxx"] Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.085362 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-acb5-account-create-update-v87vq"] Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.328108 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91fe5022-2b6f-46b9-9275-c8a809b32808" path="/var/lib/kubelet/pods/91fe5022-2b6f-46b9-9275-c8a809b32808/volumes" Jan 21 16:19:18 crc kubenswrapper[4902]: I0121 16:19:18.329333 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95" path="/var/lib/kubelet/pods/f5062b64-8c2a-46ee-ab92-3eb4d6e3fe95/volumes" Jan 21 16:19:21 crc kubenswrapper[4902]: I0121 16:19:21.142090 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1"} Jan 21 16:19:24 crc kubenswrapper[4902]: I0121 16:19:24.180413 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerStarted","Data":"cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9"} Jan 21 16:19:24 crc kubenswrapper[4902]: I0121 16:19:24.217214 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.344372818 podStartE2EDuration="34.217191132s" podCreationTimestamp="2026-01-21 16:18:50 +0000 UTC" firstStartedPulling="2026-01-21 16:18:53.000711767 +0000 UTC m=+6295.077544806" lastFinishedPulling="2026-01-21 16:19:23.873530091 +0000 UTC m=+6325.950363120" observedRunningTime="2026-01-21 16:19:24.207016265 +0000 UTC m=+6326.283849294" watchObservedRunningTime="2026-01-21 16:19:24.217191132 +0000 UTC m=+6326.294024161" Jan 21 16:19:25 crc kubenswrapper[4902]: I0121 16:19:25.034012 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8xw4q"] Jan 21 16:19:25 crc kubenswrapper[4902]: I0121 16:19:25.043517 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8xw4q"] Jan 21 16:19:26 crc kubenswrapper[4902]: I0121 16:19:26.295603 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:19:26 crc kubenswrapper[4902]: I0121 16:19:26.312541 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3" 
path="/var/lib/kubelet/pods/8ba2b6d5-88af-4c5d-93dd-21ed05fe3ba3/volumes" Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.411697 4902 scope.go:117] "RemoveContainer" containerID="203c5f96aeff362658b5520a6e9eab7da26f8f63fd730b8b01fac5d263703aa2" Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.448871 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.454428 4902 scope.go:117] "RemoveContainer" containerID="fa806723dfd7c0c4b6154749911e6912458d2480fc0fa40932f24e709061ffad" Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.516025 4902 scope.go:117] "RemoveContainer" containerID="896306bd2b1df34ec4addf4110626bc7531717802d050ed131267e70790b5a08" Jan 21 16:19:27 crc kubenswrapper[4902]: I0121 16:19:27.929931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce"} Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.104887 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.108543 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.114631 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.120065 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.128941 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211436 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211496 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211562 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211618 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211720 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211754 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.211773 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313451 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313514 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313617 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313703 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313737 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.313756 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.314121 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.314157 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.320565 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.321107 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.330402 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.330897 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.338343 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") pod \"ceilometer-0\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.445068 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:19:32 crc kubenswrapper[4902]: I0121 16:19:32.917233 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:19:32 crc kubenswrapper[4902]: W0121 16:19:32.920975 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd69a60b6_5623_4c6c_aaac_8d944a90748a.slice/crio-b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61 WatchSource:0}: Error finding container b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61: Status 404 returned error can't find the container with id b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61 Jan 21 16:19:33 crc kubenswrapper[4902]: I0121 16:19:33.019519 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61"} Jan 21 16:19:34 crc kubenswrapper[4902]: I0121 16:19:34.041535 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c"} Jan 21 16:19:35 crc kubenswrapper[4902]: I0121 16:19:35.069084 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636"} Jan 21 16:19:36 crc kubenswrapper[4902]: I0121 16:19:36.080849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce"} Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.099291 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerStarted","Data":"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b"} Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.099805 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.124712 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.373517893 podStartE2EDuration="5.124685363s" podCreationTimestamp="2026-01-21 16:19:32 +0000 UTC" firstStartedPulling="2026-01-21 16:19:32.923563491 +0000 UTC m=+6335.000396520" lastFinishedPulling="2026-01-21 16:19:36.674730951 +0000 UTC m=+6338.751563990" observedRunningTime="2026-01-21 16:19:37.122934183 +0000 UTC m=+6339.199767212" watchObservedRunningTime="2026-01-21 16:19:37.124685363 +0000 UTC m=+6339.201518402" Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.450266 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:37 crc kubenswrapper[4902]: I0121 16:19:37.453295 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:38 crc kubenswrapper[4902]: I0121 16:19:38.109938 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" 
Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.643662 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.644670 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerName="openstackclient" containerID="cri-o://eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95" gracePeriod=2 Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.656553 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.687220 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 16:19:39 crc kubenswrapper[4902]: E0121 16:19:39.687664 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerName="openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.687687 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerName="openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.687917 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerName="openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.688670 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.701324 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.716759 4902 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" podUID="052c7402-6934-4f86-bb78-e83d7da3b587" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.808926 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-combined-ca-bundle\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.808980 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d97v2\" (UniqueName: \"kubernetes.io/projected/052c7402-6934-4f86-bb78-e83d7da3b587-kube-api-access-d97v2\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.809074 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config-secret\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.809544 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " 
pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.912027 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-combined-ca-bundle\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.912082 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d97v2\" (UniqueName: \"kubernetes.io/projected/052c7402-6934-4f86-bb78-e83d7da3b587-kube-api-access-d97v2\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.912124 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config-secret\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.912350 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.913175 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.918757 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-openstack-config-secret\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.922057 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052c7402-6934-4f86-bb78-e83d7da3b587-combined-ca-bundle\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:39 crc kubenswrapper[4902]: I0121 16:19:39.928543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d97v2\" (UniqueName: \"kubernetes.io/projected/052c7402-6934-4f86-bb78-e83d7da3b587-kube-api-access-d97v2\") pod \"openstackclient\" (UID: \"052c7402-6934-4f86-bb78-e83d7da3b587\") " pod="openstack/openstackclient" Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.009056 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.603877 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:19:40 crc kubenswrapper[4902]: W0121 16:19:40.623404 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod052c7402_6934_4f86_bb78_e83d7da3b587.slice/crio-da15f1e02843264e59b668c244d416698a18a74ddf52c69eca97ac495d0a20d2 WatchSource:0}: Error finding container da15f1e02843264e59b668c244d416698a18a74ddf52c69eca97ac495d0a20d2: Status 404 returned error can't find the container with id da15f1e02843264e59b668c244d416698a18a74ddf52c69eca97ac495d0a20d2 Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.960832 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-v4xqk"] Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.962330 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:40 crc kubenswrapper[4902]: I0121 16:19:40.979906 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-v4xqk"] Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.071551 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.071839 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus" containerID="cri-o://9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d" gracePeriod=600 Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.071895 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="config-reloader" containerID="cri-o://fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1" gracePeriod=600 Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.071927 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="thanos-sidecar" containerID="cri-o://cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9" gracePeriod=600 Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.139631 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.140203 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.162151 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"052c7402-6934-4f86-bb78-e83d7da3b587","Type":"ContainerStarted","Data":"21ff18962aadea4c8d1af735e1fe49f6745b23f53882b2305982a44ac9e132be"} Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.162193 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"052c7402-6934-4f86-bb78-e83d7da3b587","Type":"ContainerStarted","Data":"da15f1e02843264e59b668c244d416698a18a74ddf52c69eca97ac495d0a20d2"} Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.179746 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.179727463 podStartE2EDuration="2.179727463s" podCreationTimestamp="2026-01-21 16:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:19:41.178663203 +0000 UTC m=+6343.255496232" watchObservedRunningTime="2026-01-21 16:19:41.179727463 +0000 UTC m=+6343.256560492" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.204607 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"] Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.206220 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.209024 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.215926 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"] Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.242596 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.242703 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.244178 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.266696 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") pod \"aodh-db-create-v4xqk\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.325660 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.349015 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.362001 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.464536 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.464705 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.466255 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.495982 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") pod \"aodh-fe19-account-create-update-m4ndc\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:41 crc kubenswrapper[4902]: I0121 16:19:41.559141 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.032928 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-v4xqk"] Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.096728 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.265149 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"] Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.296306 4902 generic.go:334] "Generic (PLEG): container finished" podID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerID="cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9" exitCode=0 Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.296336 4902 generic.go:334] "Generic (PLEG): container finished" podID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerID="fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1" exitCode=0 Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.296345 4902 generic.go:334] "Generic (PLEG): container finished" podID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerID="9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d" exitCode=0 Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.304015 4902 generic.go:334] "Generic (PLEG): container finished" podID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" containerID="eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95" exitCode=137 Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.305999 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.365280 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") pod \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.365458 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") pod \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.365668 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") pod \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.365981 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") pod \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\" (UID: \"7fbbd7fc-3ed5-4747-8723-d1b24677c146\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.395356 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m" (OuterVolumeSpecName: "kube-api-access-6lx4m") pod "7fbbd7fc-3ed5-4747-8723-d1b24677c146" (UID: "7fbbd7fc-3ed5-4747-8723-d1b24677c146"). InnerVolumeSpecName "kube-api-access-6lx4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.402661 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9"} Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.402713 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1"} Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.402724 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d"} Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.402735 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4xqk" event={"ID":"4f9de683-01b0-4513-8e18-d56361ae4bc6","Type":"ContainerStarted","Data":"03782de3cd4352efbf4af45d731178ba78494637b72a83a8537282df4f5d7339"} Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.403713 4902 scope.go:117] "RemoveContainer" containerID="eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.447565 4902 scope.go:117] "RemoveContainer" containerID="eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95" Jan 21 16:19:42 crc kubenswrapper[4902]: E0121 16:19:42.448116 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95\": container with ID starting with eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95 not found: ID does not exist" containerID="eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.448155 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95"} err="failed to get container status \"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95\": rpc error: code = NotFound desc = could not find container \"eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95\": container with ID starting with eeda23de981561c4ee07dfd2013c375cea0893455901d9904cf9c61704674e95 not found: ID does not exist" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.472269 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lx4m\" (UniqueName: \"kubernetes.io/projected/7fbbd7fc-3ed5-4747-8723-d1b24677c146-kube-api-access-6lx4m\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.491148 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fbbd7fc-3ed5-4747-8723-d1b24677c146" (UID: "7fbbd7fc-3ed5-4747-8723-d1b24677c146"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.496763 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.503656 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7fbbd7fc-3ed5-4747-8723-d1b24677c146" (UID: "7fbbd7fc-3ed5-4747-8723-d1b24677c146"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.533947 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7fbbd7fc-3ed5-4747-8723-d1b24677c146" (UID: "7fbbd7fc-3ed5-4747-8723-d1b24677c146"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.576256 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.576299 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.576312 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbbd7fc-3ed5-4747-8723-d1b24677c146-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.677436 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.677811 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.677883 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.677971 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678020 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678084 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678118 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678149 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678352 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.678434 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") pod \"5bd53316-beed-49bb-8eec-a78efdb19f0a\" (UID: \"5bd53316-beed-49bb-8eec-a78efdb19f0a\") " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.679550 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.679883 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.680161 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). 
InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.719672 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out" (OuterVolumeSpecName: "config-out") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.719918 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc" (OuterVolumeSpecName: "kube-api-access-ldpwc") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "kube-api-access-ldpwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.721080 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.721514 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config" (OuterVolumeSpecName: "config") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.722655 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.724303 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config" (OuterVolumeSpecName: "web-config") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.740571 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "5bd53316-beed-49bb-8eec-a78efdb19f0a" (UID: "5bd53316-beed-49bb-8eec-a78efdb19f0a"). InnerVolumeSpecName "pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781917 4902 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781955 4902 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5bd53316-beed-49bb-8eec-a78efdb19f0a-config-out\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781965 4902 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781974 4902 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-web-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781983 4902 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.781994 4902 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.782006 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldpwc\" (UniqueName: \"kubernetes.io/projected/5bd53316-beed-49bb-8eec-a78efdb19f0a-kube-api-access-ldpwc\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.782015 4902 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5bd53316-beed-49bb-8eec-a78efdb19f0a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.782085 4902 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") on node \"crc\" " Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.782098 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bd53316-beed-49bb-8eec-a78efdb19f0a-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.820993 4902 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.821165 4902 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc") on node "crc" Jan 21 16:19:42 crc kubenswrapper[4902]: I0121 16:19:42.883806 4902 reconciler_common.go:293] "Volume detached for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.315686 4902 generic.go:334] "Generic (PLEG): container finished" podID="4f9de683-01b0-4513-8e18-d56361ae4bc6" containerID="04e8685d31a4c1b85ba91615c510f74e4584d6a0993549e22bc5847f14ee429d" exitCode=0 Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.315776 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4xqk" event={"ID":"4f9de683-01b0-4513-8e18-d56361ae4bc6","Type":"ContainerDied","Data":"04e8685d31a4c1b85ba91615c510f74e4584d6a0993549e22bc5847f14ee429d"} Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.319491 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5bd53316-beed-49bb-8eec-a78efdb19f0a","Type":"ContainerDied","Data":"0fc37cbdb7d70159ca8295f97b2fa25e8f7409a5c1a417f96662f1f15ccc1985"} Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.319548 4902 scope.go:117] "RemoveContainer" containerID="cd406b88fdad163c7edea30845b9be98daefef8f90607411f070f18d212dccf9" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.319823 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.339851 4902 generic.go:334] "Generic (PLEG): container finished" podID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" containerID="a17204ae8500af5c3ac489e63a42369874fd6943aaf98b293789e79f2dc7c291" exitCode=0 Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.340138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fe19-account-create-update-m4ndc" event={"ID":"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea","Type":"ContainerDied","Data":"a17204ae8500af5c3ac489e63a42369874fd6943aaf98b293789e79f2dc7c291"} Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.340194 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fe19-account-create-update-m4ndc" event={"ID":"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea","Type":"ContainerStarted","Data":"da0b1780c377ee2c6d363298f097a1e811d58ad4385aef980f7111ef3a9d2062"} Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.365907 4902 scope.go:117] "RemoveContainer" containerID="fe72e7d7b7c14cd45027bed85d16a5a727a64a07ece72e2a953a58f5e1029bf1" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.397478 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.410248 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.436103 4902 scope.go:117] "RemoveContainer" containerID="9a355b75289815a6e09b6415743b2be3282dad6b45d280d3916521198ab7d34d" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.445425 4902 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:19:43 crc kubenswrapper[4902]: E0121 16:19:43.445974 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.445997 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus" Jan 21 16:19:43 crc kubenswrapper[4902]: E0121 16:19:43.446059 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="init-config-reloader" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446070 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="init-config-reloader" Jan 21 16:19:43 crc kubenswrapper[4902]: E0121 16:19:43.446102 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="thanos-sidecar" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446111 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="thanos-sidecar" Jan 21 16:19:43 crc kubenswrapper[4902]: E0121 16:19:43.446133 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="config-reloader" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446138 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="config-reloader" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446329 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446352 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="config-reloader" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.446364 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="thanos-sidecar" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.448978 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456334 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456377 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456405 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456786 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456861 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.456998 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.457684 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.461636 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nkftv" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.465243 4902 scope.go:117] "RemoveContainer" containerID="ac1d3d4a6fd4695fa37a965837770a925659e19ff547b7bc40c9d740b5a8f0f9" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.465732 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.467795 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601025 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601169 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601251 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc 
kubenswrapper[4902]: I0121 16:19:43.601404 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601429 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601554 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/294a561c-9181-4330-86e5-ab51e9f3c07c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601607 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601652 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601728 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601782 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601822 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqdm\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-kube-api-access-6xqdm\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601915 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.601952 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.703888 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.703952 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704006 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/294a561c-9181-4330-86e5-ab51e9f3c07c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704028 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704075 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704099 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704123 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 
16:19:43.704147 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xqdm\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-kube-api-access-6xqdm\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704206 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704233 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704271 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704304 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.704336 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.705294 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.705424 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.706578 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/294a561c-9181-4330-86e5-ab51e9f3c07c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.710062 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.710348 4902 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.710394 4902 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2072cb44e63c79cbe1d1309d1abe2e89a3820c0b6ba86768b97aed1379d46137/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.711622 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.711702 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.712944 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.715012 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.717123 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/294a561c-9181-4330-86e5-ab51e9f3c07c-config\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " 
pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.724875 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xqdm\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-kube-api-access-6xqdm\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.725157 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/294a561c-9181-4330-86e5-ab51e9f3c07c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.730738 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/294a561c-9181-4330-86e5-ab51e9f3c07c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.774444 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5bdea85a-afb0-4ca4-b129-f0ce347faedc\") pod \"prometheus-metric-storage-0\" (UID: \"294a561c-9181-4330-86e5-ab51e9f3c07c\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:43 crc kubenswrapper[4902]: I0121 16:19:43.838932 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 16:19:44 crc kubenswrapper[4902]: I0121 16:19:44.306369 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" path="/var/lib/kubelet/pods/5bd53316-beed-49bb-8eec-a78efdb19f0a/volumes" Jan 21 16:19:44 crc kubenswrapper[4902]: I0121 16:19:44.307596 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fbbd7fc-3ed5-4747-8723-d1b24677c146" path="/var/lib/kubelet/pods/7fbbd7fc-3ed5-4747-8723-d1b24677c146/volumes" Jan 21 16:19:44 crc kubenswrapper[4902]: I0121 16:19:44.357803 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:19:44 crc kubenswrapper[4902]: I0121 16:19:44.915580 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.050803 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.057577 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") pod \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.057756 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") pod \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\" (UID: \"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea\") " Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.058437 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" (UID: "947c6da7-eea1-412b-8f8d-f1cdfadcf4ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.064735 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh" (OuterVolumeSpecName: "kube-api-access-xnnlh") pod "947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" (UID: "947c6da7-eea1-412b-8f8d-f1cdfadcf4ea"). InnerVolumeSpecName "kube-api-access-xnnlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.159035 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") pod \"4f9de683-01b0-4513-8e18-d56361ae4bc6\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.159276 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") pod \"4f9de683-01b0-4513-8e18-d56361ae4bc6\" (UID: \"4f9de683-01b0-4513-8e18-d56361ae4bc6\") " Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.159893 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f9de683-01b0-4513-8e18-d56361ae4bc6" (UID: "4f9de683-01b0-4513-8e18-d56361ae4bc6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.160099 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.160127 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnnlh\" (UniqueName: \"kubernetes.io/projected/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea-kube-api-access-xnnlh\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.161678 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk" (OuterVolumeSpecName: "kube-api-access-mhrqk") pod "4f9de683-01b0-4513-8e18-d56361ae4bc6" (UID: "4f9de683-01b0-4513-8e18-d56361ae4bc6"). InnerVolumeSpecName "kube-api-access-mhrqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.263394 4902 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f9de683-01b0-4513-8e18-d56361ae4bc6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.263430 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhrqk\" (UniqueName: \"kubernetes.io/projected/4f9de683-01b0-4513-8e18-d56361ae4bc6-kube-api-access-mhrqk\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.367431 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fe19-account-create-update-m4ndc" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.367427 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fe19-account-create-update-m4ndc" event={"ID":"947c6da7-eea1-412b-8f8d-f1cdfadcf4ea","Type":"ContainerDied","Data":"da0b1780c377ee2c6d363298f097a1e811d58ad4385aef980f7111ef3a9d2062"} Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.367548 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da0b1780c377ee2c6d363298f097a1e811d58ad4385aef980f7111ef3a9d2062" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.368984 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"55af016233e8bdb2e69339f03e2e871b89a824bcde36fb6977c57f1e39316cdb"} Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.371603 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4xqk" event={"ID":"4f9de683-01b0-4513-8e18-d56361ae4bc6","Type":"ContainerDied","Data":"03782de3cd4352efbf4af45d731178ba78494637b72a83a8537282df4f5d7339"} Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.371646 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03782de3cd4352efbf4af45d731178ba78494637b72a83a8537282df4f5d7339" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.371714 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-v4xqk" Jan 21 16:19:45 crc kubenswrapper[4902]: I0121 16:19:45.449532 4902 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="5bd53316-beed-49bb-8eec-a78efdb19f0a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.138:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:19:45 crc kubenswrapper[4902]: E0121 16:19:45.487429 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod947c6da7_eea1_412b_8f8d_f1cdfadcf4ea.slice/crio-da0b1780c377ee2c6d363298f097a1e811d58ad4385aef980f7111ef3a9d2062\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f9de683_01b0_4513_8e18_d56361ae4bc6.slice/crio-03782de3cd4352efbf4af45d731178ba78494637b72a83a8537282df4f5d7339\": RecentStats: unable to find data in memory cache]" Jan 21 16:19:49 crc kubenswrapper[4902]: I0121 16:19:49.422849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"a90dbf20357ff7f033de8ec9d2730be60fc8e73c3b80e638a51f94879d51d50d"} Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.315215 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-bvsxp"] Jan 21 16:19:51 crc kubenswrapper[4902]: E0121 16:19:51.315912 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" containerName="mariadb-account-create-update" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.316019 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" containerName="mariadb-account-create-update" Jan 21 16:19:51 crc kubenswrapper[4902]: E0121 16:19:51.318941 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f9de683-01b0-4513-8e18-d56361ae4bc6" containerName="mariadb-database-create" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.318989 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9de683-01b0-4513-8e18-d56361ae4bc6" containerName="mariadb-database-create" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.319463 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9de683-01b0-4513-8e18-d56361ae4bc6" containerName="mariadb-database-create" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.319481 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" containerName="mariadb-account-create-update" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.320500 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.327197 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.327344 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.327494 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zlqm8" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.327495 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.354593 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-bvsxp"] Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.465328 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.465885 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.466777 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.466824 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.569722 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.569809 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.569846 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: 
I0121 16:19:51.569876 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.578811 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.582406 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.584820 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.589967 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") pod \"aodh-db-sync-bvsxp\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:51 crc kubenswrapper[4902]: I0121 16:19:51.651916 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:19:52 crc kubenswrapper[4902]: I0121 16:19:52.130632 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-bvsxp"] Jan 21 16:19:52 crc kubenswrapper[4902]: W0121 16:19:52.134445 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ad5c1ce_9471_430a_b273_873699a86d57.slice/crio-717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81 WatchSource:0}: Error finding container 717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81: Status 404 returned error can't find the container with id 717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81 Jan 21 16:19:52 crc kubenswrapper[4902]: I0121 16:19:52.456733 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bvsxp" event={"ID":"7ad5c1ce-9471-430a-b273-873699a86d57","Type":"ContainerStarted","Data":"717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81"} Jan 21 16:19:55 crc kubenswrapper[4902]: I0121 16:19:55.491334 4902 generic.go:334] "Generic (PLEG): container finished" podID="294a561c-9181-4330-86e5-ab51e9f3c07c" containerID="a90dbf20357ff7f033de8ec9d2730be60fc8e73c3b80e638a51f94879d51d50d" exitCode=0 Jan 21 16:19:55 crc kubenswrapper[4902]: I0121 16:19:55.491819 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerDied","Data":"a90dbf20357ff7f033de8ec9d2730be60fc8e73c3b80e638a51f94879d51d50d"} Jan 21 16:19:56 crc kubenswrapper[4902]: I0121 16:19:56.516502 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bvsxp" event={"ID":"7ad5c1ce-9471-430a-b273-873699a86d57","Type":"ContainerStarted","Data":"8ce585bfe7e263f38d6e4b6cf4cca542c267ca3f4df18725b7e9510d21180fb3"} Jan 21 16:19:56 crc kubenswrapper[4902]: I0121 16:19:56.520267 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"6deb6b902dd5dd2154aed2a6c1c7127254a8e01efdfa7a3e5492b8e17c3da362"} Jan 21 16:19:56 crc kubenswrapper[4902]: I0121 16:19:56.546850 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-bvsxp" podStartSLOduration=1.793954152 podStartE2EDuration="5.546827449s" podCreationTimestamp="2026-01-21 16:19:51 +0000 UTC" firstStartedPulling="2026-01-21 16:19:52.13646686 +0000 UTC m=+6354.213299889" lastFinishedPulling="2026-01-21 16:19:55.889340167 +0000 UTC m=+6357.966173186" observedRunningTime="2026-01-21 16:19:56.535301795 +0000 UTC m=+6358.612134854" watchObservedRunningTime="2026-01-21 16:19:56.546827449 +0000 UTC m=+6358.623660498" Jan 21 16:19:57 crc kubenswrapper[4902]: I0121 16:19:57.039067 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"] Jan 21 16:19:57 crc kubenswrapper[4902]: I0121 16:19:57.049171 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xrlb5"] Jan 21 16:19:57 crc kubenswrapper[4902]: I0121 16:19:57.058034 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-70cb-account-create-update-hprl8"] Jan 21 16:19:57 crc kubenswrapper[4902]: I0121 16:19:57.066787 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xrlb5"] Jan 21 16:19:58 
crc kubenswrapper[4902]: I0121 16:19:58.316786 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311b51a9-7349-42c3-8777-e1da9c997866" path="/var/lib/kubelet/pods/311b51a9-7349-42c3-8777-e1da9c997866/volumes" Jan 21 16:19:58 crc kubenswrapper[4902]: I0121 16:19:58.319154 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32dabaa5-86fa-4ff4-9a8e-7cd5360c978c" path="/var/lib/kubelet/pods/32dabaa5-86fa-4ff4-9a8e-7cd5360c978c/volumes" Jan 21 16:19:59 crc kubenswrapper[4902]: I0121 16:19:59.551852 4902 generic.go:334] "Generic (PLEG): container finished" podID="7ad5c1ce-9471-430a-b273-873699a86d57" containerID="8ce585bfe7e263f38d6e4b6cf4cca542c267ca3f4df18725b7e9510d21180fb3" exitCode=0 Jan 21 16:19:59 crc kubenswrapper[4902]: I0121 16:19:59.551899 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bvsxp" event={"ID":"7ad5c1ce-9471-430a-b273-873699a86d57","Type":"ContainerDied","Data":"8ce585bfe7e263f38d6e4b6cf4cca542c267ca3f4df18725b7e9510d21180fb3"} Jan 21 16:20:00 crc kubenswrapper[4902]: I0121 16:20:00.569275 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"d56f4095260e9d7560120b406ac9902151f1010693b894dc04fe5a44023bba5c"} Jan 21 16:20:00 crc kubenswrapper[4902]: I0121 16:20:00.569625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"294a561c-9181-4330-86e5-ab51e9f3c07c","Type":"ContainerStarted","Data":"5cd62bbc0874133bb18e6d511f86ad6fa95a5bfc1930982ecd818f9e3d17728e"} Jan 21 16:20:00 crc kubenswrapper[4902]: I0121 16:20:00.608117 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.608101104 podStartE2EDuration="17.608101104s" podCreationTimestamp="2026-01-21 16:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:20:00.596836777 +0000 UTC m=+6362.673669886" watchObservedRunningTime="2026-01-21 16:20:00.608101104 +0000 UTC m=+6362.684934133" Jan 21 16:20:00 crc kubenswrapper[4902]: I0121 16:20:00.997170 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.094368 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") pod \"7ad5c1ce-9471-430a-b273-873699a86d57\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.094519 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") pod \"7ad5c1ce-9471-430a-b273-873699a86d57\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.094543 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") pod \"7ad5c1ce-9471-430a-b273-873699a86d57\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.094711 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") pod \"7ad5c1ce-9471-430a-b273-873699a86d57\" (UID: \"7ad5c1ce-9471-430a-b273-873699a86d57\") " Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.100405 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts" (OuterVolumeSpecName: "scripts") pod "7ad5c1ce-9471-430a-b273-873699a86d57" (UID: "7ad5c1ce-9471-430a-b273-873699a86d57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.100768 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg" (OuterVolumeSpecName: "kube-api-access-9zlsg") pod "7ad5c1ce-9471-430a-b273-873699a86d57" (UID: "7ad5c1ce-9471-430a-b273-873699a86d57"). InnerVolumeSpecName "kube-api-access-9zlsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.123095 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data" (OuterVolumeSpecName: "config-data") pod "7ad5c1ce-9471-430a-b273-873699a86d57" (UID: "7ad5c1ce-9471-430a-b273-873699a86d57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.125829 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ad5c1ce-9471-430a-b273-873699a86d57" (UID: "7ad5c1ce-9471-430a-b273-873699a86d57"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.197124 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.197154 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.197164 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ad5c1ce-9471-430a-b273-873699a86d57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.197177 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zlsg\" (UniqueName: \"kubernetes.io/projected/7ad5c1ce-9471-430a-b273-873699a86d57-kube-api-access-9zlsg\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.582952 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bvsxp" event={"ID":"7ad5c1ce-9471-430a-b273-873699a86d57","Type":"ContainerDied","Data":"717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81"} Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.583440 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="717d1e1819361222257d7c8c0b607114d7f551f24f2d5ad4f20d3d971a608d81" Jan 21 16:20:01 crc kubenswrapper[4902]: I0121 16:20:01.582999 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-bvsxp" Jan 21 16:20:02 crc kubenswrapper[4902]: I0121 16:20:02.457305 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:20:03 crc kubenswrapper[4902]: I0121 16:20:03.036391 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:20:03 crc kubenswrapper[4902]: I0121 16:20:03.044833 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-nnsm5"] Jan 21 16:20:03 crc kubenswrapper[4902]: I0121 16:20:03.839007 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 16:20:04 crc kubenswrapper[4902]: I0121 16:20:04.313872 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f532d2b6-7ad3-4b83-9100-d4b94d5a512d" path="/var/lib/kubelet/pods/f532d2b6-7ad3-4b83-9100-d4b94d5a512d/volumes" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.111172 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:06 crc kubenswrapper[4902]: E0121 16:20:06.112198 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad5c1ce-9471-430a-b273-873699a86d57" containerName="aodh-db-sync" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.112218 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad5c1ce-9471-430a-b273-873699a86d57" containerName="aodh-db-sync" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.112495 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad5c1ce-9471-430a-b273-873699a86d57" containerName="aodh-db-sync" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.114830 4902 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.122080 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zlqm8" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.122458 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.122648 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.124928 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.201485 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.201539 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.201665 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.201834 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.384177 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.384222 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.384364 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.384522 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc 
kubenswrapper[4902]: I0121 16:20:06.390925 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.407979 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.426858 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.427671 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") pod \"aodh-0\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " pod="openstack/aodh-0" Jan 21 16:20:06 crc kubenswrapper[4902]: I0121 16:20:06.460687 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:07 crc kubenswrapper[4902]: I0121 16:20:07.173205 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:07 crc kubenswrapper[4902]: I0121 16:20:07.661693 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"6eedcc7523efe081a15cb931149e929c40f8fd79a47397d367b9e351ed5ed0bc"} Jan 21 16:20:08 crc kubenswrapper[4902]: I0121 16:20:08.682285 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc"} Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.427801 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.428745 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-central-agent" containerID="cri-o://17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" gracePeriod=30 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.428927 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="proxy-httpd" containerID="cri-o://feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" gracePeriod=30 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.428976 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="sg-core" containerID="cri-o://ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" gracePeriod=30 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.429007 4902 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-notification-agent" containerID="cri-o://52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" gracePeriod=30 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.726221 4902 generic.go:334] "Generic (PLEG): container finished" podID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerID="ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" exitCode=2 Jan 21 16:20:09 crc kubenswrapper[4902]: I0121 16:20:09.726273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce"} Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.277993 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.749507 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710"} Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.765959 4902 generic.go:334] "Generic (PLEG): container finished" podID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerID="feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" exitCode=0 Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.765998 4902 generic.go:334] "Generic (PLEG): container finished" podID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerID="17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" exitCode=0 Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.766074 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b"} Jan 21 16:20:10 crc kubenswrapper[4902]: I0121 16:20:10.766112 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c"} Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.627519 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.759168 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.759485 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.759667 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760225 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760341 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760465 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760575 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") pod \"d69a60b6-5623-4c6c-aaac-8d944a90748a\" (UID: \"d69a60b6-5623-4c6c-aaac-8d944a90748a\") " Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.760154 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.762547 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.777822 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj" (OuterVolumeSpecName: "kube-api-access-wgtzj") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "kube-api-access-wgtzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.789359 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts" (OuterVolumeSpecName: "scripts") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.818859 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77"} Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.823481 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825183 4902 generic.go:334] "Generic (PLEG): container finished" podID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerID="52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" exitCode=0 Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825481 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825320 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636"} Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825682 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d69a60b6-5623-4c6c-aaac-8d944a90748a","Type":"ContainerDied","Data":"b1f7bb2c211e7960134591749043375f5b8146796c4a6545e0fa123d72475b61"} Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.825701 4902 scope.go:117] "RemoveContainer" containerID="feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864469 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864517 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864528 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864540 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d69a60b6-5623-4c6c-aaac-8d944a90748a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.864551 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgtzj\" (UniqueName: \"kubernetes.io/projected/d69a60b6-5623-4c6c-aaac-8d944a90748a-kube-api-access-wgtzj\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.893983 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data" (OuterVolumeSpecName: "config-data") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.912712 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d69a60b6-5623-4c6c-aaac-8d944a90748a" (UID: "d69a60b6-5623-4c6c-aaac-8d944a90748a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.966185 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4902]: I0121 16:20:12.966214 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d69a60b6-5623-4c6c-aaac-8d944a90748a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.170418 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.188408 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.196654 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:13 crc kubenswrapper[4902]: E0121 16:20:13.197129 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="proxy-httpd" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197148 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="proxy-httpd" Jan 21 16:20:13 crc kubenswrapper[4902]: E0121 16:20:13.197166 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-notification-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197174 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-notification-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: E0121 16:20:13.197209 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-central-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197218 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-central-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: E0121 16:20:13.197231 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="sg-core" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197237 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="sg-core" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197430 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-notification-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197447 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="proxy-httpd" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197461 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="ceilometer-central-agent" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.197471 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" containerName="sg-core" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.199249 4902 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.202280 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.202372 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.227727 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.288901 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.288947 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.288968 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.288992 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.289066 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.289086 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.289106 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.393161 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc 
kubenswrapper[4902]: I0121 16:20:13.393500 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.393609 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.393960 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.394093 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.394198 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.394326 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.397748 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.401747 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.409251 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.416859 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.417727 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.421692 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.440150 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") pod \"ceilometer-0\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.527319 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.761183 4902 scope.go:117] "RemoveContainer" containerID="ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.839224 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.848173 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.937447 4902 scope.go:117] "RemoveContainer" containerID="52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" Jan 21 16:20:13 crc kubenswrapper[4902]: I0121 16:20:13.962349 4902 scope.go:117] "RemoveContainer" containerID="17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.002531 4902 scope.go:117] "RemoveContainer" containerID="feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" Jan 21 16:20:14 crc kubenswrapper[4902]: E0121 16:20:14.002928 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b\": container with ID starting with feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b not found: ID does not exist" containerID="feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.002966 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b"} err="failed to get container status \"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b\": rpc error: code = NotFound desc = could not find container \"feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b\": container with ID starting with feb3f9742de2b18e9995a3eb837a0894294b1fdca1a34c6be7f70c79398f480b not found: ID does not exist" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.002997 4902 scope.go:117] "RemoveContainer" containerID="ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" Jan 21 16:20:14 crc kubenswrapper[4902]: E0121 16:20:14.004344 4902 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce\": container with ID starting with ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce not found: ID does not exist" containerID="ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004376 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce"} err="failed to get container status \"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce\": rpc error: code = NotFound desc = could not find container \"ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce\": container with ID starting with ed9188ce5445b4ea2b9cc306ade65ac33bb507619e8fcd17a82f27843a6e62ce not found: ID does not exist" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004396 4902 scope.go:117] "RemoveContainer" containerID="52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" Jan 21 16:20:14 crc kubenswrapper[4902]: E0121 16:20:14.004643 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636\": container with ID starting with 52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636 not found: ID does not exist" containerID="52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004662 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636"} err="failed to get container status \"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636\": rpc error: code = NotFound desc = could not find container \"52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636\": container with ID starting with 52fe9ab6a9074c0ee00122875e05462d5b34f4b9d78bd00280ff1e3dd9508636 not found: ID does not exist" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004677 4902 scope.go:117] "RemoveContainer" containerID="17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" Jan 21 16:20:14 crc kubenswrapper[4902]: E0121 16:20:14.004897 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c\": container with ID starting with 17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c not found: ID does not exist" containerID="17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.004911 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c"} err="failed to get container status \"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c\": rpc error: code = NotFound desc = could not find container \"17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c\": container with ID starting with 17878cad645891b6a6ea4a0f5890b796e5eb765b63c32e713e52769edbf4121c not found: ID does not exist" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.309267 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d69a60b6-5623-4c6c-aaac-8d944a90748a" path="/var/lib/kubelet/pods/d69a60b6-5623-4c6c-aaac-8d944a90748a/volumes" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.311335 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.863820 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"6a0ffc37c1dc6797f40e78442f47022b5947c62404a4648910e4832c3ca3e7c8"} Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.867437 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-api" containerID="cri-o://a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc" gracePeriod=30 Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.867656 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerStarted","Data":"dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3"} Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.867945 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-listener" containerID="cri-o://dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3" gracePeriod=30 Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.867994 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-notifier" containerID="cri-o://7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77" gracePeriod=30 Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.868026 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-evaluator" containerID="cri-o://80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710" gracePeriod=30 Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.877826 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 16:20:14 crc kubenswrapper[4902]: I0121 16:20:14.893257 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.300447 podStartE2EDuration="8.893233963s" podCreationTimestamp="2026-01-21 16:20:06 +0000 UTC" firstStartedPulling="2026-01-21 16:20:07.180677009 +0000 UTC m=+6369.257510038" lastFinishedPulling="2026-01-21 16:20:13.773463972 +0000 UTC m=+6375.850297001" observedRunningTime="2026-01-21 16:20:14.892947645 +0000 UTC m=+6376.969780674" watchObservedRunningTime="2026-01-21 16:20:14.893233963 +0000 UTC m=+6376.970066992" Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.875943 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73"} Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.876819 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23"} Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878568 4902 generic.go:334] "Generic (PLEG): container finished" podID="76119988-951c-4bee-9832-7ac41e0335de" containerID="7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77" exitCode=0 Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878656 4902 generic.go:334] "Generic (PLEG): container finished" podID="76119988-951c-4bee-9832-7ac41e0335de" containerID="80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710" exitCode=0 Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878718 4902 generic.go:334] "Generic (PLEG): container finished" podID="76119988-951c-4bee-9832-7ac41e0335de" containerID="a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc" exitCode=0 Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878646 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77"} Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878822 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710"} Jan 21 16:20:15 crc kubenswrapper[4902]: I0121 16:20:15.878835 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc"} Jan 21 16:20:16 crc kubenswrapper[4902]: I0121 16:20:16.889469 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012"} Jan 21 16:20:18 crc kubenswrapper[4902]: I0121 16:20:18.909750 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerStarted","Data":"5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef"} Jan 21 16:20:18 crc kubenswrapper[4902]: I0121 16:20:18.910262 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:20:18 crc kubenswrapper[4902]: I0121 16:20:18.930357 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.538001097 podStartE2EDuration="5.930336278s" podCreationTimestamp="2026-01-21 16:20:13 +0000 UTC" firstStartedPulling="2026-01-21 16:20:14.304880817 +0000 UTC m=+6376.381713846" lastFinishedPulling="2026-01-21 16:20:17.697215998 +0000 UTC m=+6379.774049027" observedRunningTime="2026-01-21 16:20:18.929753622 +0000 UTC m=+6381.006586651" watchObservedRunningTime="2026-01-21 16:20:18.930336278 +0000 UTC m=+6381.007169307" Jan 21 16:20:28 crc kubenswrapper[4902]: I0121 16:20:28.111660 4902 scope.go:117] "RemoveContainer" containerID="bfee1fd2715dd8d05c9392fd3ab86d1d97c355292e968dc34fcc4d66a846b5d3" Jan 21 16:20:28 crc kubenswrapper[4902]: I0121 16:20:28.140404 4902 scope.go:117] "RemoveContainer" containerID="03abc4558e909383d3d41af8248acf4829b9d6450d3df00a2f6958bd3e3264e7" Jan 21 16:20:28 crc 
kubenswrapper[4902]: I0121 16:20:28.220417 4902 scope.go:117] "RemoveContainer" containerID="e3e9c87d9d90cc49442da28637d2cced6b19e9645d80d76c03c98029e5898f54" Jan 21 16:20:28 crc kubenswrapper[4902]: I0121 16:20:28.255945 4902 scope.go:117] "RemoveContainer" containerID="aef8011a0955408b9b496fd1dcaa48e11cb807245e77d4a67f379e75f01adc85" Jan 21 16:20:43 crc kubenswrapper[4902]: I0121 16:20:43.531624 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.448634 4902 generic.go:334] "Generic (PLEG): container finished" podID="76119988-951c-4bee-9832-7ac41e0335de" containerID="dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3" exitCode=137 Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.448714 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3"} Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.820507 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.925813 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") pod \"76119988-951c-4bee-9832-7ac41e0335de\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.925859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") pod \"76119988-951c-4bee-9832-7ac41e0335de\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.925946 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") pod \"76119988-951c-4bee-9832-7ac41e0335de\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.927032 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") pod \"76119988-951c-4bee-9832-7ac41e0335de\" (UID: \"76119988-951c-4bee-9832-7ac41e0335de\") " Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.933427 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww" (OuterVolumeSpecName: "kube-api-access-4b7ww") pod "76119988-951c-4bee-9832-7ac41e0335de" (UID: "76119988-951c-4bee-9832-7ac41e0335de"). InnerVolumeSpecName "kube-api-access-4b7ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:45 crc kubenswrapper[4902]: I0121 16:20:45.958275 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts" (OuterVolumeSpecName: "scripts") pod "76119988-951c-4bee-9832-7ac41e0335de" (UID: "76119988-951c-4bee-9832-7ac41e0335de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.030909 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.030950 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b7ww\" (UniqueName: \"kubernetes.io/projected/76119988-951c-4bee-9832-7ac41e0335de-kube-api-access-4b7ww\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.075991 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76119988-951c-4bee-9832-7ac41e0335de" (UID: "76119988-951c-4bee-9832-7ac41e0335de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.090643 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data" (OuterVolumeSpecName: "config-data") pod "76119988-951c-4bee-9832-7ac41e0335de" (UID: "76119988-951c-4bee-9832-7ac41e0335de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.133436 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.133476 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76119988-951c-4bee-9832-7ac41e0335de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.464718 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"76119988-951c-4bee-9832-7ac41e0335de","Type":"ContainerDied","Data":"6eedcc7523efe081a15cb931149e929c40f8fd79a47397d367b9e351ed5ed0bc"} Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.464785 4902 scope.go:117] "RemoveContainer" containerID="dba9d32141eb642334d7b118ec33f493bcf870c1c7a90210f0fe7599d9f406a3" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.464798 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.489811 4902 scope.go:117] "RemoveContainer" containerID="7757c4395511b6240ada2abaaec9c1d3750f8cb7664c807793da11bcff1a2a77" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.491543 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.514115 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.519864 4902 scope.go:117] "RemoveContainer" containerID="80a1fc628ae387280e9939ad7bb8b7e183b4a8c6a2e6094ae36e73a0d3c70710" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537285 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:46 crc kubenswrapper[4902]: E0121 16:20:46.537856 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-listener" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537876 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-listener" Jan 21 16:20:46 crc kubenswrapper[4902]: E0121 16:20:46.537894 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-evaluator" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537901 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-evaluator" Jan 21 16:20:46 crc kubenswrapper[4902]: E0121 16:20:46.537908 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-notifier" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537914 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-notifier" Jan 21 16:20:46 crc kubenswrapper[4902]: E0121 16:20:46.537952 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-api" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.537958 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-api" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.538234 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-notifier" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.538260 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-evaluator" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.538270 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-listener" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.538283 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="76119988-951c-4bee-9832-7ac41e0335de" containerName="aodh-api" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.540451 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.540551 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.545125 4902 scope.go:117] "RemoveContainer" containerID="a1aabff299ab969d0cfdbf9f4830081affd8dc382ecd5010dad633aa2dd9aecc" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.551660 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.552472 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zlqm8" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.552599 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.552702 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.552981 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645199 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-public-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645243 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-internal-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645297 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shsrb\" (UniqueName: \"kubernetes.io/projected/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-kube-api-access-shsrb\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645409 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645716 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-config-data\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.645998 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-scripts\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.748681 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shsrb\" (UniqueName: \"kubernetes.io/projected/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-kube-api-access-shsrb\") pod \"aodh-0\" (UID: 
\"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.748767 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.748879 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-config-data\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.748972 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-scripts\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.749068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-public-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.749104 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-internal-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.753162 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-public-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.753585 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-scripts\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.753848 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-internal-tls-certs\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.754355 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.757680 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-config-data\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.769203 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-shsrb\" (UniqueName: \"kubernetes.io/projected/da0893d4-ad82-4a00-8ccf-5e33ead4d85d-kube-api-access-shsrb\") pod \"aodh-0\" (UID: \"da0893d4-ad82-4a00-8ccf-5e33ead4d85d\") " pod="openstack/aodh-0" Jan 21 16:20:46 crc kubenswrapper[4902]: I0121 16:20:46.885367 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.263907 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.264488 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerName="kube-state-metrics" containerID="cri-o://2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169" gracePeriod=30 Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.381067 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.483620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"42ae57e5d80c3db206b50a9ae70b559daf09dddf48d7ab2aa530f63f0b4e7ebb"} Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.486532 4902 generic.go:334] "Generic (PLEG): container finished" podID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerID="2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169" exitCode=2 Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.486564 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11823665-4fce-4950-a6d3-bc34bafbc01d","Type":"ContainerDied","Data":"2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169"} Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.705648 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.870717 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") pod \"11823665-4fce-4950-a6d3-bc34bafbc01d\" (UID: \"11823665-4fce-4950-a6d3-bc34bafbc01d\") " Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.874386 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8" (OuterVolumeSpecName: "kube-api-access-7xqw8") pod "11823665-4fce-4950-a6d3-bc34bafbc01d" (UID: "11823665-4fce-4950-a6d3-bc34bafbc01d"). InnerVolumeSpecName "kube-api-access-7xqw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:47 crc kubenswrapper[4902]: I0121 16:20:47.973148 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xqw8\" (UniqueName: \"kubernetes.io/projected/11823665-4fce-4950-a6d3-bc34bafbc01d-kube-api-access-7xqw8\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.308325 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76119988-951c-4bee-9832-7ac41e0335de" path="/var/lib/kubelet/pods/76119988-951c-4bee-9832-7ac41e0335de/volumes" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.499246 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"66a3046a4895e9e5103faf47f578caecb488a2e5c0322f867d6a150de6e92f58"} Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.502688 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"11823665-4fce-4950-a6d3-bc34bafbc01d","Type":"ContainerDied","Data":"6cce11915f96493257a7b6fc755ce2c6cf10806ef6428a4421a57569fde4b038"} Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.502726 4902 scope.go:117] "RemoveContainer" containerID="2ef81e85f6901284a8b407191f27c64483362c30e1987b357fa5d21aa8dc8169" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.502868 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.540130 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.550867 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.561347 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:48 crc kubenswrapper[4902]: E0121 16:20:48.561855 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerName="kube-state-metrics" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.561892 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerName="kube-state-metrics" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.562176 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" containerName="kube-state-metrics" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.563024 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.568548 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.570216 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.575286 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.692163 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln6bv\" (UniqueName: \"kubernetes.io/projected/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-api-access-ln6bv\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.692364 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.692546 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.692593 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.794311 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.794445 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.794492 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.794615 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln6bv\" 
(UniqueName: \"kubernetes.io/projected/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-api-access-ln6bv\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.799987 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.800159 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.802350 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.816565 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln6bv\" (UniqueName: \"kubernetes.io/projected/c4fb45d4-a64d-4e42-86b5-9e3924f0f877-kube-api-access-ln6bv\") pod \"kube-state-metrics-0\" (UID: \"c4fb45d4-a64d-4e42-86b5-9e3924f0f877\") " pod="openstack/kube-state-metrics-0" Jan 21 16:20:48 crc kubenswrapper[4902]: I0121 16:20:48.891367 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.194540 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.195007 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-central-agent" containerID="cri-o://75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23" gracePeriod=30 Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.195099 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-notification-agent" containerID="cri-o://a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73" gracePeriod=30 Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.195123 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="proxy-httpd" containerID="cri-o://5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef" gracePeriod=30 Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.195319 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="sg-core" containerID="cri-o://7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012" gracePeriod=30 Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.401407 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:20:49 crc kubenswrapper[4902]: W0121 16:20:49.408497 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4fb45d4_a64d_4e42_86b5_9e3924f0f877.slice/crio-2d6cac04b94abd198ddc9f056233f15ee32d343ba2114d80269dbe12097e3669 WatchSource:0}: Error finding container 2d6cac04b94abd198ddc9f056233f15ee32d343ba2114d80269dbe12097e3669: Status 404 returned error can't find the container with id 2d6cac04b94abd198ddc9f056233f15ee32d343ba2114d80269dbe12097e3669 Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.521186 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerID="5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef" exitCode=0 Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.521518 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerID="7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012" exitCode=2 Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.521243 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef"} Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.521638 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012"} Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.523362 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"c4fb45d4-a64d-4e42-86b5-9e3924f0f877","Type":"ContainerStarted","Data":"2d6cac04b94abd198ddc9f056233f15ee32d343ba2114d80269dbe12097e3669"} Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.611679 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"] Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.613918 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.616123 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.630145 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"] Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720594 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720796 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720825 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720847 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720915 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.720953 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.822792 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") pod 
\"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.822983 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.823015 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.823067 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.823139 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.823196 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.824634 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.824705 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.824734 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.824734 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc 
kubenswrapper[4902]: I0121 16:20:49.824749 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.855908 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") pod \"dnsmasq-dns-6cfddb65-w5qxr\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:49 crc kubenswrapper[4902]: I0121 16:20:49.937109 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.309876 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11823665-4fce-4950-a6d3-bc34bafbc01d" path="/var/lib/kubelet/pods/11823665-4fce-4950-a6d3-bc34bafbc01d/volumes" Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.535940 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"dd134312a8e6d13b94423c2b0f77109d171de479dad512d546b77e5a94340278"} Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.540885 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerID="75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23" exitCode=0 Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.540940 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23"} Jan 21 16:20:50 crc kubenswrapper[4902]: I0121 16:20:50.552139 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"] Jan 21 16:20:50 crc kubenswrapper[4902]: W0121 16:20:50.592269 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a6d8ddd_b000_4d99_a48a_394c9b673d67.slice/crio-861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40 WatchSource:0}: Error finding container 861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40: Status 404 returned error can't find the container with id 861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40 Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.554595 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerID="9bddc8d1320be1ebe8e83f1e5a8405452ed01e5c87ac037e91ffd39c0dec9810" exitCode=0 Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.555084 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerDied","Data":"9bddc8d1320be1ebe8e83f1e5a8405452ed01e5c87ac037e91ffd39c0dec9810"} Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.555111 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" 
event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerStarted","Data":"861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40"} Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.564081 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"de87b2378d70a3b0fe8a6ee1a75d737943529d10890ad31f6bf3b5bbb1222f4e"} Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.567605 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c4fb45d4-a64d-4e42-86b5-9e3924f0f877","Type":"ContainerStarted","Data":"fb40b5c4780738fe4caf73638dac1ee89b017da961e27171f3fa517cf1c6e91b"} Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.582359 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 16:20:51 crc kubenswrapper[4902]: I0121 16:20:51.615794 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.025719205 podStartE2EDuration="3.615709707s" podCreationTimestamp="2026-01-21 16:20:48 +0000 UTC" firstStartedPulling="2026-01-21 16:20:49.410988796 +0000 UTC m=+6411.487821825" lastFinishedPulling="2026-01-21 16:20:51.000979288 +0000 UTC m=+6413.077812327" observedRunningTime="2026-01-21 16:20:51.598487113 +0000 UTC m=+6413.675320162" watchObservedRunningTime="2026-01-21 16:20:51.615709707 +0000 UTC m=+6413.692542756" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.603417 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"da0893d4-ad82-4a00-8ccf-5e33ead4d85d","Type":"ContainerStarted","Data":"258ead796100088e593345257501311f4b8fdf6493f496733ef79b978d12e809"} Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.615389 4902 generic.go:334] "Generic (PLEG): container finished" podID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerID="a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73" exitCode=0 Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.615487 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73"} Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.617826 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerStarted","Data":"eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f"} Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.631820 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.871327239 podStartE2EDuration="6.631800301s" podCreationTimestamp="2026-01-21 16:20:46 +0000 UTC" firstStartedPulling="2026-01-21 16:20:47.384142269 +0000 UTC m=+6409.460975298" lastFinishedPulling="2026-01-21 16:20:52.144615331 +0000 UTC m=+6414.221448360" observedRunningTime="2026-01-21 16:20:52.625888834 +0000 UTC m=+6414.702721863" watchObservedRunningTime="2026-01-21 16:20:52.631800301 +0000 UTC m=+6414.708633320" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.680056 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" podStartSLOduration=3.680024738 podStartE2EDuration="3.680024738s" 
podCreationTimestamp="2026-01-21 16:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:20:52.662158485 +0000 UTC m=+6414.738991524" watchObservedRunningTime="2026-01-21 16:20:52.680024738 +0000 UTC m=+6414.756857767" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.885008 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899582 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899630 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899680 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899707 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899788 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.899914 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") pod \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\" (UID: \"c4f95e4f-1f5c-4664-91c4-8c904bbac588\") " Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.901544 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.901950 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.913188 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts" (OuterVolumeSpecName: "scripts") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.913331 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz" (OuterVolumeSpecName: "kube-api-access-67rfz") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "kube-api-access-67rfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:52 crc kubenswrapper[4902]: I0121 16:20:52.968229 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002075 4902 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002111 4902 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002122 4902 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4f95e4f-1f5c-4664-91c4-8c904bbac588-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002130 4902 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.002141 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67rfz\" (UniqueName: \"kubernetes.io/projected/c4f95e4f-1f5c-4664-91c4-8c904bbac588-kube-api-access-67rfz\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.024876 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.043245 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data" (OuterVolumeSpecName: "config-data") pod "c4f95e4f-1f5c-4664-91c4-8c904bbac588" (UID: "c4f95e4f-1f5c-4664-91c4-8c904bbac588"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.104431 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.104474 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f95e4f-1f5c-4664-91c4-8c904bbac588-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.630512 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4f95e4f-1f5c-4664-91c4-8c904bbac588","Type":"ContainerDied","Data":"6a0ffc37c1dc6797f40e78442f47022b5947c62404a4648910e4832c3ca3e7c8"} Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.630583 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.630587 4902 scope.go:117] "RemoveContainer" containerID="5caee1232afcd9c4c5a8dc77f05596a1ad92560cc083b64beb6fa8f0cbb3b5ef" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.631844 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.663012 4902 scope.go:117] "RemoveContainer" containerID="7fe3c3c2d648b699f01ffcd94a87024e19a4dd935851ffcf7e84ba63d370a012" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.673952 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.687872 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.690913 4902 scope.go:117] "RemoveContainer" containerID="a8f0823ba1c0a5684f79d4de9630d7b181fe3de6041c572de1b9d8bd941a0b73" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.707547 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:53 crc kubenswrapper[4902]: E0121 16:20:53.708059 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-notification-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708081 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-notification-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: E0121 16:20:53.708111 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="proxy-httpd" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708120 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="proxy-httpd" Jan 21 16:20:53 crc kubenswrapper[4902]: E0121 16:20:53.708143 4902 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="sg-core" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708151 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="sg-core" Jan 21 16:20:53 crc kubenswrapper[4902]: E0121 16:20:53.708178 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-central-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708186 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-central-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708423 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-central-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708450 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="ceilometer-notification-agent" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708459 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="sg-core" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.708480 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" containerName="proxy-httpd" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.710530 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.719924 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-scripts\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720122 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-run-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720268 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfzc\" (UniqueName: \"kubernetes.io/projected/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-kube-api-access-fxfzc\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720412 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720431 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720615 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:20:53 crc 
kubenswrapper[4902]: I0121 16:20:53.720731 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720736 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.720918 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.721152 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-log-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.721295 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-config-data\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.721611 4902 scope.go:117] "RemoveContainer" containerID="75fba54a577984f27ac176f357e52d94c5cdfda78ed3b9823956d8f8f3c0ca23" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.733322 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823290 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-log-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-config-data\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823428 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-scripts\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823460 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-run-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823497 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxfzc\" (UniqueName: 
\"kubernetes.io/projected/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-kube-api-access-fxfzc\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823529 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823559 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.823587 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.829070 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.829371 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-log-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.832760 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-config-data\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.832818 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-run-httpd\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.835469 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-scripts\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.836664 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.837407 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:53 crc kubenswrapper[4902]: I0121 16:20:53.849882 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxfzc\" (UniqueName: \"kubernetes.io/projected/57c9a2f0-4583-4438-b35f-f92aa9a7efe8-kube-api-access-fxfzc\") pod \"ceilometer-0\" (UID: \"57c9a2f0-4583-4438-b35f-f92aa9a7efe8\") " pod="openstack/ceilometer-0" Jan 21 16:20:54 crc kubenswrapper[4902]: I0121 16:20:54.042942 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:20:54 crc kubenswrapper[4902]: I0121 16:20:54.308070 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f95e4f-1f5c-4664-91c4-8c904bbac588" path="/var/lib/kubelet/pods/c4f95e4f-1f5c-4664-91c4-8c904bbac588/volumes" Jan 21 16:20:54 crc kubenswrapper[4902]: I0121 16:20:54.538422 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:20:54 crc kubenswrapper[4902]: I0121 16:20:54.640512 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"80650a1cb535f1364497f61a972bf62bf38dfe62ce6816554930d1dc44e55ac6"} Jan 21 16:20:56 crc kubenswrapper[4902]: I0121 16:20:56.663620 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"275286b6050db6af2bddfc616e19826798b370ed60cafa1df32c7ce30574461e"} Jan 21 16:20:57 crc kubenswrapper[4902]: I0121 16:20:57.675618 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"620a949064b4846ca7bf499ce55f7ea0f8524126db27987486422027d701a448"} Jan 21 16:20:58 crc kubenswrapper[4902]: I0121 16:20:58.686216 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"bf2bae481d78afd4a1c6ad1175134042339026c121d475e917c5d3605e92cc1c"} Jan 21 16:20:58 crc kubenswrapper[4902]: I0121 16:20:58.899569 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 16:20:59 crc kubenswrapper[4902]: I0121 16:20:59.698459 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57c9a2f0-4583-4438-b35f-f92aa9a7efe8","Type":"ContainerStarted","Data":"f7f13f07105599c6839260edb92a502a27890441ab8132ce835a2dd8d0fb2803"} Jan 21 16:20:59 crc kubenswrapper[4902]: I0121 16:20:59.698990 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:20:59 crc kubenswrapper[4902]: I0121 16:20:59.727768 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9860646100000001 podStartE2EDuration="6.727744413s" podCreationTimestamp="2026-01-21 16:20:53 +0000 UTC" firstStartedPulling="2026-01-21 16:20:54.542484248 +0000 UTC m=+6416.619317277" lastFinishedPulling="2026-01-21 16:20:59.284164051 +0000 UTC m=+6421.360997080" observedRunningTime="2026-01-21 16:20:59.719152302 +0000 UTC m=+6421.795985331" watchObservedRunningTime="2026-01-21 16:20:59.727744413 +0000 UTC 
m=+6421.804577442" Jan 21 16:20:59 crc kubenswrapper[4902]: I0121 16:20:59.939070 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.025088 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.025718 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" containerID="cri-o://baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1" gracePeriod=10 Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.062778 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-974x9"] Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.083203 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"] Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.101461 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8csjv"] Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.120642 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8csjv"] Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.133518 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3bb8-account-create-update-k967z"] Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.148237 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-974x9"] Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.179793 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f4c775f77-hlsqd"] Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.182298 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.215090 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4c775f77-hlsqd"] Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.314462 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chc7n\" (UniqueName: \"kubernetes.io/projected/45e057f7-f682-43f2-a02c-effad070763f-kube-api-access-chc7n\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.314521 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-openstack-cell1\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.314571 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-dns-svc\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.314596 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-nb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.315100 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5963807a-fc48-485b-a3a5-7b07791dfdd0" path="/var/lib/kubelet/pods/5963807a-fc48-485b-a3a5-7b07791dfdd0/volumes" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.315412 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-sb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.315455 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-config\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.317432 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c847ba2-4e65-4677-b8b6-514162b0c1bc" path="/var/lib/kubelet/pods/9c847ba2-4e65-4677-b8b6-514162b0c1bc/volumes" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.318717 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe639d2-1844-47b8-b4c8-3b602547070a" path="/var/lib/kubelet/pods/fbe639d2-1844-47b8-b4c8-3b602547070a/volumes" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.339556 4902 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.88:5353: connect: connection refused" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.533372 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chc7n\" (UniqueName: \"kubernetes.io/projected/45e057f7-f682-43f2-a02c-effad070763f-kube-api-access-chc7n\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.533431 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-openstack-cell1\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.533494 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-dns-svc\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534022 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-nb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534078 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-sb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-config\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534324 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-openstack-cell1\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534340 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-dns-svc\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-nb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: 
\"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.534953 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-config\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.535032 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45e057f7-f682-43f2-a02c-effad070763f-ovsdbserver-sb\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.571283 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chc7n\" (UniqueName: \"kubernetes.io/projected/45e057f7-f682-43f2-a02c-effad070763f-kube-api-access-chc7n\") pod \"dnsmasq-dns-f4c775f77-hlsqd\" (UID: \"45e057f7-f682-43f2-a02c-effad070763f\") " pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.571883 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.757965 4902 generic.go:334] "Generic (PLEG): container finished" podID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerID="baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1" exitCode=0 Jan 21 16:21:00 crc kubenswrapper[4902]: I0121 16:21:00.758338 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerDied","Data":"baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1"} Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.056355 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.067097 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.079097 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6d31-account-create-update-v52m2"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.089407 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9kql9"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.098947 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.108818 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4fdb-account-create-update-4c46m"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.245766 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.255714 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.255810 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.256522 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.256649 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.256807 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") pod \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\" (UID: \"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7\") " Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.263472 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2" (OuterVolumeSpecName: "kube-api-access-h2vm2") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "kube-api-access-h2vm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.327333 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.329497 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.348924 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.360411 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.360443 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.360456 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2vm2\" (UniqueName: \"kubernetes.io/projected/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-kube-api-access-h2vm2\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.360469 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.390444 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config" (OuterVolumeSpecName: "config") pod "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" (UID: "e49061e8-8daf-4a22-b1f0-4241f2b1c9c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.463420 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.514605 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4c775f77-hlsqd"] Jan 21 16:21:01 crc kubenswrapper[4902]: W0121 16:21:01.517219 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45e057f7_f682_43f2_a02c_effad070763f.slice/crio-967b3c6d41da05ee645c5169bf9065ab85463e55df1a04e9a7536c3d2ee1dff6 WatchSource:0}: Error finding container 967b3c6d41da05ee645c5169bf9065ab85463e55df1a04e9a7536c3d2ee1dff6: Status 404 returned error can't find the container with id 967b3c6d41da05ee645c5169bf9065ab85463e55df1a04e9a7536c3d2ee1dff6 Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.847444 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" event={"ID":"45e057f7-f682-43f2-a02c-effad070763f","Type":"ContainerStarted","Data":"967b3c6d41da05ee645c5169bf9065ab85463e55df1a04e9a7536c3d2ee1dff6"} Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.851601 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" event={"ID":"e49061e8-8daf-4a22-b1f0-4241f2b1c9c7","Type":"ContainerDied","Data":"38677ca61f06b9260ed5f983f8682c334bd87743eff5be88bd87e6a5090aa3da"} Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.851654 4902 scope.go:117] "RemoveContainer" containerID="baf3c482643b3ef05bb015530d7c001d912cf37cabd28f9882b045c54788e7f1" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.851985 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f7b5475f9-g5lzz" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.892219 4902 scope.go:117] "RemoveContainer" containerID="e76932770c6254b11b917bc645b83b0c1aaf28ee17d431c3d586506bef4ab067" Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.896395 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:21:01 crc kubenswrapper[4902]: I0121 16:21:01.920227 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f7b5475f9-g5lzz"] Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.375082 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58" path="/var/lib/kubelet/pods/bbdd6fbd-8aa5-4b8f-8b58-a3f7061edc58/volumes" Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.378299 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172" path="/var/lib/kubelet/pods/ccfbdaf2-99ab-45ac-b7d5-0b4aa150c172/volumes" Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.378959 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" path="/var/lib/kubelet/pods/e49061e8-8daf-4a22-b1f0-4241f2b1c9c7/volumes" Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.379805 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f58498-29bd-47d8-8af1-ac98b4a9f510" path="/var/lib/kubelet/pods/e4f58498-29bd-47d8-8af1-ac98b4a9f510/volumes" Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.863630 4902 generic.go:334] "Generic (PLEG): container finished" podID="45e057f7-f682-43f2-a02c-effad070763f" containerID="8e9e09464d5cd039c9442390e76b3f6f970a3878cbf93d936fdfb98fc79ed667" exitCode=0 Jan 21 16:21:02 crc kubenswrapper[4902]: I0121 16:21:02.863800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" event={"ID":"45e057f7-f682-43f2-a02c-effad070763f","Type":"ContainerDied","Data":"8e9e09464d5cd039c9442390e76b3f6f970a3878cbf93d936fdfb98fc79ed667"} Jan 21 16:21:03 crc kubenswrapper[4902]: I0121 16:21:03.873944 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" event={"ID":"45e057f7-f682-43f2-a02c-effad070763f","Type":"ContainerStarted","Data":"71d693e87447205212c9352b002dc8c72d551deb1c29459ab33dc7d5f2feb14f"} Jan 21 16:21:03 crc kubenswrapper[4902]: I0121 16:21:03.874255 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.574203 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.598512 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f4c775f77-hlsqd" podStartSLOduration=10.59849317 podStartE2EDuration="10.59849317s" podCreationTimestamp="2026-01-21 16:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:21:03.901180606 +0000 UTC m=+6425.978013635" watchObservedRunningTime="2026-01-21 16:21:10.59849317 +0000 UTC m=+6432.675326199" Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.673468 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"] Jan 21 
16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.673751 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="dnsmasq-dns" containerID="cri-o://eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f" gracePeriod=10 Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.955593 4902 generic.go:334] "Generic (PLEG): container finished" podID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerID="eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f" exitCode=0 Jan 21 16:21:10 crc kubenswrapper[4902]: I0121 16:21:10.955652 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerDied","Data":"eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f"} Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.047996 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.061383 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zr8nj"] Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.242676 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.279878 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280027 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280087 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280197 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280694 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.280740 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " 
Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.328230 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc" (OuterVolumeSpecName: "kube-api-access-28xdc") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "kube-api-access-28xdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.381822 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.382442 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") pod \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\" (UID: \"1a6d8ddd-b000-4d99-a48a-394c9b673d67\") " Jan 21 16:21:11 crc kubenswrapper[4902]: W0121 16:21:11.382648 4902 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1a6d8ddd-b000-4d99-a48a-394c9b673d67/volumes/kubernetes.io~configmap/ovsdbserver-sb Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.382669 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.387745 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28xdc\" (UniqueName: \"kubernetes.io/projected/1a6d8ddd-b000-4d99-a48a-394c9b673d67-kube-api-access-28xdc\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.388061 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.397426 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.398713 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config" (OuterVolumeSpecName: "config") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.400812 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.401192 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "1a6d8ddd-b000-4d99-a48a-394c9b673d67" (UID: "1a6d8ddd-b000-4d99-a48a-394c9b673d67"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.491030 4902 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.491082 4902 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.491092 4902 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.491122 4902 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/1a6d8ddd-b000-4d99-a48a-394c9b673d67-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.968695 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" event={"ID":"1a6d8ddd-b000-4d99-a48a-394c9b673d67","Type":"ContainerDied","Data":"861b75366a847a792132019b5bf5301f2b69db15d74e5a937b6410bd67658d40"} Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.968757 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cfddb65-w5qxr" Jan 21 16:21:11 crc kubenswrapper[4902]: I0121 16:21:11.968755 4902 scope.go:117] "RemoveContainer" containerID="eb92db1dc47cbf0d9a02655148d3afd612fd5dad1b60c90dd453bb59f7ad2d9f" Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.005876 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"] Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.010440 4902 scope.go:117] "RemoveContainer" containerID="9bddc8d1320be1ebe8e83f1e5a8405452ed01e5c87ac037e91ffd39c0dec9810" Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.018143 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cfddb65-w5qxr"] Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.307818 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" path="/var/lib/kubelet/pods/1a6d8ddd-b000-4d99-a48a-394c9b673d67/volumes" Jan 21 16:21:12 crc kubenswrapper[4902]: I0121 16:21:12.308807 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e6442c-e6fd-498e-b20d-e994574644ea" path="/var/lib/kubelet/pods/76e6442c-e6fd-498e-b20d-e994574644ea/volumes" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.599183 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q"] Jan 21 16:21:20 crc kubenswrapper[4902]: E0121 16:21:20.600208 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600229 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: E0121 16:21:20.600273 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="init" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600281 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="init" Jan 21 16:21:20 crc kubenswrapper[4902]: E0121 16:21:20.600305 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600312 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: E0121 16:21:20.600327 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="init" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600335 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="init" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600592 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a6d8ddd-b000-4d99-a48a-394c9b673d67" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.600620 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e49061e8-8daf-4a22-b1f0-4241f2b1c9c7" containerName="dnsmasq-dns" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.601604 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.605578 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.605651 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.605895 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.606145 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.618527 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q"] Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.719186 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.719234 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.719418 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.719467 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.821075 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 
16:21:20.821163 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.821213 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.821239 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.827675 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.828325 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.828583 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.836579 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:20 crc kubenswrapper[4902]: I0121 16:21:20.928563 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:21 crc kubenswrapper[4902]: I0121 16:21:21.646335 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q"] Jan 21 16:21:22 crc kubenswrapper[4902]: I0121 16:21:22.084617 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" event={"ID":"8b723bd7-4449-4516-bcc6-9d57d981fbda","Type":"ContainerStarted","Data":"1034a61bf078784c944c93b937eae597f0c63c5b54d928588db8926f39a5574c"} Jan 21 16:21:24 crc kubenswrapper[4902]: I0121 16:21:24.066652 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:21:28 crc kubenswrapper[4902]: I0121 16:21:28.450596 4902 scope.go:117] "RemoveContainer" containerID="99ee9f7749f725c9768c807df30815b54542175e3f04ac09d8600799af1e8a19" Jan 21 16:21:30 crc kubenswrapper[4902]: I0121 16:21:30.040297 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:21:30 crc kubenswrapper[4902]: I0121 16:21:30.049917 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mqjfk"] Jan 21 16:21:30 crc kubenswrapper[4902]: I0121 16:21:30.498112 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fafbdf5-1100-4f6f-831e-c7dd0fc63586" path="/var/lib/kubelet/pods/6fafbdf5-1100-4f6f-831e-c7dd0fc63586/volumes" Jan 21 16:21:31 crc kubenswrapper[4902]: I0121 16:21:31.035169 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:21:31 crc kubenswrapper[4902]: I0121 16:21:31.048345 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7ld7m"] Jan 21 16:21:32 crc kubenswrapper[4902]: I0121 16:21:32.349601 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beebb97d-c56a-4c7d-8ec0-f9982f9c2e32" path="/var/lib/kubelet/pods/beebb97d-c56a-4c7d-8ec0-f9982f9c2e32/volumes" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.074828 4902 scope.go:117] "RemoveContainer" containerID="7e054620420f286eb319ea74bdca60ca0a6e43b9d52a5c4ad7043b88a7a02929" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.137052 4902 scope.go:117] "RemoveContainer" containerID="432f7ea37f3132bc52dfdced9ef97fb63c40a136694ea136586f2dee4c4a42b9" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.198772 4902 scope.go:117] "RemoveContainer" containerID="e2e258a3a1605851e7cb0ee36afe37bb54f98c9526d53b997a37f6c2cacd6192" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.237802 4902 scope.go:117] "RemoveContainer" containerID="f7c278e1da3c54353778da6f63a10b5d381146af280b9714be7ae6c71d2e3772" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.298246 4902 scope.go:117] "RemoveContainer" containerID="889fe026bf2a7b74189409dad70c2684f40ab43f381e9a39094266539161c3b9" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.444303 4902 scope.go:117] "RemoveContainer" containerID="1b0ff0cc281058854299a37c0eae467595b367d385ca015e5d0368dda142849e" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.464977 4902 scope.go:117] "RemoveContainer" containerID="9e04cfcc3e9b81819b9ca08bf91b4f4038827b55094f93cb2cd3586ac9a3d537" Jan 21 16:21:33 crc kubenswrapper[4902]: I0121 16:21:33.497205 4902 scope.go:117] "RemoveContainer" 
containerID="a9669cf760ec41fe8c9ac56172de1dfc2733858ea7763d6ffbfc15c535c182ce" Jan 21 16:21:34 crc kubenswrapper[4902]: I0121 16:21:34.280011 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" event={"ID":"8b723bd7-4449-4516-bcc6-9d57d981fbda","Type":"ContainerStarted","Data":"fe1f700c2f757bf5a46cbc09cb60490bbf221b2b62b3b42652e0e3b68bcf0dd9"} Jan 21 16:21:34 crc kubenswrapper[4902]: I0121 16:21:34.308938 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" podStartSLOduration=2.576475875 podStartE2EDuration="14.308905349s" podCreationTimestamp="2026-01-21 16:21:20 +0000 UTC" firstStartedPulling="2026-01-21 16:21:21.73515291 +0000 UTC m=+6443.811985939" lastFinishedPulling="2026-01-21 16:21:33.467582384 +0000 UTC m=+6455.544415413" observedRunningTime="2026-01-21 16:21:34.3050221 +0000 UTC m=+6456.381855189" watchObservedRunningTime="2026-01-21 16:21:34.308905349 +0000 UTC m=+6456.385738428" Jan 21 16:21:47 crc kubenswrapper[4902]: I0121 16:21:47.432972 4902 generic.go:334] "Generic (PLEG): container finished" podID="8b723bd7-4449-4516-bcc6-9d57d981fbda" containerID="fe1f700c2f757bf5a46cbc09cb60490bbf221b2b62b3b42652e0e3b68bcf0dd9" exitCode=0 Jan 21 16:21:47 crc kubenswrapper[4902]: I0121 16:21:47.433075 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" event={"ID":"8b723bd7-4449-4516-bcc6-9d57d981fbda","Type":"ContainerDied","Data":"fe1f700c2f757bf5a46cbc09cb60490bbf221b2b62b3b42652e0e3b68bcf0dd9"} Jan 21 16:21:47 crc kubenswrapper[4902]: I0121 16:21:47.770566 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:21:47 crc kubenswrapper[4902]: I0121 16:21:47.770656 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:21:48 crc kubenswrapper[4902]: I0121 16:21:48.897866 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.002349 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") pod \"8b723bd7-4449-4516-bcc6-9d57d981fbda\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.002821 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") pod \"8b723bd7-4449-4516-bcc6-9d57d981fbda\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.002903 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") pod \"8b723bd7-4449-4516-bcc6-9d57d981fbda\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.002945 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") pod \"8b723bd7-4449-4516-bcc6-9d57d981fbda\" (UID: \"8b723bd7-4449-4516-bcc6-9d57d981fbda\") " Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.009641 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf" (OuterVolumeSpecName: "kube-api-access-dk4sf") pod "8b723bd7-4449-4516-bcc6-9d57d981fbda" (UID: "8b723bd7-4449-4516-bcc6-9d57d981fbda"). InnerVolumeSpecName "kube-api-access-dk4sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.009632 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "8b723bd7-4449-4516-bcc6-9d57d981fbda" (UID: "8b723bd7-4449-4516-bcc6-9d57d981fbda"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.033362 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory" (OuterVolumeSpecName: "inventory") pod "8b723bd7-4449-4516-bcc6-9d57d981fbda" (UID: "8b723bd7-4449-4516-bcc6-9d57d981fbda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.041993 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8b723bd7-4449-4516-bcc6-9d57d981fbda" (UID: "8b723bd7-4449-4516-bcc6-9d57d981fbda"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.070889 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"] Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.086906 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qd5pv"] Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.106113 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.106175 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.106187 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk4sf\" (UniqueName: \"kubernetes.io/projected/8b723bd7-4449-4516-bcc6-9d57d981fbda-kube-api-access-dk4sf\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.106228 4902 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b723bd7-4449-4516-bcc6-9d57d981fbda-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.458967 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" event={"ID":"8b723bd7-4449-4516-bcc6-9d57d981fbda","Type":"ContainerDied","Data":"1034a61bf078784c944c93b937eae597f0c63c5b54d928588db8926f39a5574c"} Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.459010 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1034a61bf078784c944c93b937eae597f0c63c5b54d928588db8926f39a5574c" Jan 21 16:21:49 crc kubenswrapper[4902]: I0121 16:21:49.459089 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q" Jan 21 16:21:50 crc kubenswrapper[4902]: I0121 16:21:50.306848 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c2e205-1cb6-4b63-89d5-c03370d5cb02" path="/var/lib/kubelet/pods/87c2e205-1cb6-4b63-89d5-c03370d5cb02/volumes" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.362385 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"] Jan 21 16:21:53 crc kubenswrapper[4902]: E0121 16:21:53.363435 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b723bd7-4449-4516-bcc6-9d57d981fbda" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.363456 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b723bd7-4449-4516-bcc6-9d57d981fbda" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.363775 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b723bd7-4449-4516-bcc6-9d57d981fbda" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.364772 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.370299 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.370668 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.370869 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.371035 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.400632 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"] Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.409738 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.409813 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.409845 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.410003 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.511816 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.511898 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.511939 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.512186 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.518819 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.520532 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.521742 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.529303 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:53 crc kubenswrapper[4902]: I0121 16:21:53.689483 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:21:54 crc kubenswrapper[4902]: I0121 16:21:54.315084 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c"] Jan 21 16:21:54 crc kubenswrapper[4902]: I0121 16:21:54.505542 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" event={"ID":"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f","Type":"ContainerStarted","Data":"d937f1a62ac88d359e95c410ee456b4680107ca512a37ba97d0e11eaf1bd08e7"} Jan 21 16:21:55 crc kubenswrapper[4902]: I0121 16:21:55.529451 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" event={"ID":"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f","Type":"ContainerStarted","Data":"b5588d16688a7ebc8d6fd23427c875175924aa3ba2e94e6335eed27cd3b25dfb"} Jan 21 16:21:55 crc kubenswrapper[4902]: I0121 16:21:55.554946 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" podStartSLOduration=1.877800626 podStartE2EDuration="2.55492171s" podCreationTimestamp="2026-01-21 16:21:53 +0000 UTC" firstStartedPulling="2026-01-21 16:21:54.29943043 +0000 UTC m=+6476.376263459" lastFinishedPulling="2026-01-21 16:21:54.976551514 +0000 UTC m=+6477.053384543" observedRunningTime="2026-01-21 16:21:55.545830524 +0000 UTC m=+6477.622663553" watchObservedRunningTime="2026-01-21 16:21:55.55492171 +0000 UTC m=+6477.631754729" Jan 21 16:22:17 crc kubenswrapper[4902]: I0121 16:22:17.769649 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:22:17 crc kubenswrapper[4902]: I0121 16:22:17.770298 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:22:33 crc kubenswrapper[4902]: I0121 16:22:33.797515 4902 scope.go:117] "RemoveContainer" containerID="4c0781a19d8b6c48f488f12d1b70c08865b23377c77473fa64a8a1663801f2cb" Jan 21 16:22:33 crc kubenswrapper[4902]: I0121 16:22:33.849839 4902 scope.go:117] "RemoveContainer" 
containerID="d6b0eff18372cd37ddfe92c986fb4a923d9dbd3f107869f09f0fdbe8e2eaaa5c" Jan 21 16:22:33 crc kubenswrapper[4902]: I0121 16:22:33.906545 4902 scope.go:117] "RemoveContainer" containerID="2c69e68e7d02d1de6bf68e1e65e17ee7498b6d1191ba5efd74e3f15243d799ed" Jan 21 16:22:33 crc kubenswrapper[4902]: I0121 16:22:33.968729 4902 scope.go:117] "RemoveContainer" containerID="dd9c814774718de26b2a6f5f159c980f718ec5bd198d471d2426d82a67f32ddd" Jan 21 16:22:34 crc kubenswrapper[4902]: I0121 16:22:34.011818 4902 scope.go:117] "RemoveContainer" containerID="0f5fdb1f77ee5e53923e8edceba05628177b711a2533fe02fb33769c82576bcf" Jan 21 16:22:34 crc kubenswrapper[4902]: I0121 16:22:34.043268 4902 scope.go:117] "RemoveContainer" containerID="b5b92e7f1cc27fed5221f05667fdb25b332ac410148a8012346660a03a7b0fdf" Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.769572 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.770195 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.770251 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.772386 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:22:47 crc kubenswrapper[4902]: I0121 16:22:47.772458 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce" gracePeriod=600 Jan 21 16:22:48 crc kubenswrapper[4902]: I0121 16:22:48.182158 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce" exitCode=0 Jan 21 16:22:48 crc kubenswrapper[4902]: I0121 16:22:48.182186 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce"} Jan 21 16:22:48 crc kubenswrapper[4902]: I0121 16:22:48.182508 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"} Jan 21 16:22:48 crc kubenswrapper[4902]: I0121 
16:22:48.182540 4902 scope.go:117] "RemoveContainer" containerID="a0d88bceaa2fc6b4218ed80a0e76761167d2a6c22cfaeb1d2f973ba49e1a46ac" Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.049896 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-5zjhh"] Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.084609 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"] Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.095599 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-ae8b-account-create-update-q86xl"] Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.107443 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-5zjhh"] Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.316818 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1baaefdd-ea47-4ac0-98d0-d370180b0eb0" path="/var/lib/kubelet/pods/1baaefdd-ea47-4ac0-98d0-d370180b0eb0/volumes" Jan 21 16:23:22 crc kubenswrapper[4902]: I0121 16:23:22.318262 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507bf37f-b9da-4064-970b-89f9a27589fe" path="/var/lib/kubelet/pods/507bf37f-b9da-4064-970b-89f9a27589fe/volumes" Jan 21 16:23:28 crc kubenswrapper[4902]: I0121 16:23:28.059204 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"] Jan 21 16:23:28 crc kubenswrapper[4902]: I0121 16:23:28.082608 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-q8nvb"] Jan 21 16:23:28 crc kubenswrapper[4902]: I0121 16:23:28.308931 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a4e549-a509-40db-8756-e37432024793" path="/var/lib/kubelet/pods/f4a4e549-a509-40db-8756-e37432024793/volumes" Jan 21 16:23:30 crc kubenswrapper[4902]: I0121 16:23:30.027471 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"] Jan 21 16:23:30 crc kubenswrapper[4902]: I0121 16:23:30.037447 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-d1a6-account-create-update-cw969"] Jan 21 16:23:30 crc kubenswrapper[4902]: I0121 16:23:30.306768 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502e21f3-ea57-4f04-8e23-9b45c7a07ca2" path="/var/lib/kubelet/pods/502e21f3-ea57-4f04-8e23-9b45c7a07ca2/volumes" Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.318998 4902 scope.go:117] "RemoveContainer" containerID="c51c67b3d5eb2547d5d118a566d05127b5232bbe2fa2468af4680ad00279aa48" Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.351537 4902 scope.go:117] "RemoveContainer" containerID="26b51b45f191ff662cf71fe75dfa0a28808489ff71c63772b28558abe727c5a5" Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.385459 4902 scope.go:117] "RemoveContainer" containerID="a5edfafdeacf21f426cc5bd6281b1cd868d12717fac023895ab55ea3fbcafc1e" Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.432600 4902 scope.go:117] "RemoveContainer" containerID="bbd0af7b0e6a302b723bb3848d085087ea3fb23417c8175750a0c41598fe534f" Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.496954 4902 scope.go:117] "RemoveContainer" containerID="5cf0b5bdbf01f12d44cd41471171a9c5244aec958a6477fc8835553eabc2f3b6" Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.568184 4902 scope.go:117] "RemoveContainer" 
containerID="69c36b0bae9178724a6d97de46722cf5b0cc80d59e4ce8f2e0554584489171d5" Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.620190 4902 scope.go:117] "RemoveContainer" containerID="7bdedfb5108f3ffecf10a0859392a7cf8d5159f213fdc4909c0c06024f91b0c1" Jan 21 16:23:34 crc kubenswrapper[4902]: I0121 16:23:34.644321 4902 scope.go:117] "RemoveContainer" containerID="9ceea852acb3ca8b99175935197b72276107562be97cda3fb8e5495a3f66a192" Jan 21 16:24:12 crc kubenswrapper[4902]: I0121 16:24:12.043730 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-vrr2k"] Jan 21 16:24:12 crc kubenswrapper[4902]: I0121 16:24:12.052179 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-vrr2k"] Jan 21 16:24:12 crc kubenswrapper[4902]: I0121 16:24:12.309256 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a6ab47-0bbe-428a-82f5-478fc4c52e8a" path="/var/lib/kubelet/pods/60a6ab47-0bbe-428a-82f5-478fc4c52e8a/volumes" Jan 21 16:24:34 crc kubenswrapper[4902]: I0121 16:24:34.839392 4902 scope.go:117] "RemoveContainer" containerID="abc9a540052a00b1952e4ccbff28d0fd5e66b03f552886a2028474527bd5343e" Jan 21 16:24:34 crc kubenswrapper[4902]: I0121 16:24:34.872840 4902 scope.go:117] "RemoveContainer" containerID="f4cdf18149c84ac20ab00cae2362d90191fa45e99a1761f8508af240e2f326b6" Jan 21 16:25:17 crc kubenswrapper[4902]: I0121 16:25:17.769596 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:25:17 crc kubenswrapper[4902]: I0121 16:25:17.770552 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:25:47 crc kubenswrapper[4902]: I0121 16:25:47.769946 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:25:47 crc kubenswrapper[4902]: I0121 16:25:47.770632 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.770125 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.770818 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.771014 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.772735 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:26:17 crc kubenswrapper[4902]: I0121 16:26:17.772876 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" gracePeriod=600 Jan 21 16:26:17 crc kubenswrapper[4902]: E0121 16:26:17.913811 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:26:18 crc kubenswrapper[4902]: I0121 16:26:18.157936 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" exitCode=0 Jan 21 16:26:18 crc kubenswrapper[4902]: I0121 16:26:18.158010 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8"} Jan 21 16:26:18 crc kubenswrapper[4902]: I0121 16:26:18.158342 4902 scope.go:117] "RemoveContainer" containerID="dd9f943d521b68000af79f0fd73624ba084fada704e30191659b3cc0a8066bce" Jan 21 16:26:18 crc kubenswrapper[4902]: I0121 16:26:18.161996 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:26:18 crc kubenswrapper[4902]: E0121 16:26:18.162641 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:26:29 crc kubenswrapper[4902]: I0121 16:26:29.295481 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:26:29 crc kubenswrapper[4902]: E0121 16:26:29.296329 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:26:41 crc kubenswrapper[4902]: I0121 16:26:41.295535 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:26:41 crc kubenswrapper[4902]: E0121 16:26:41.296773 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.830382 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"] Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.833345 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.853034 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"] Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.964522 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.965101 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:49 crc kubenswrapper[4902]: I0121 16:26:49.965687 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.068215 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.068461 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.068538 4902 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.069130 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.069149 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.095145 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") pod \"redhat-marketplace-6smhb\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.161077 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:26:50 crc kubenswrapper[4902]: I0121 16:26:50.673949 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"] Jan 21 16:26:51 crc kubenswrapper[4902]: I0121 16:26:51.512339 4902 generic.go:334] "Generic (PLEG): container finished" podID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerID="32cdb44674e6374f766b65eaed6a61b60758360dd1e8e594ab7a3baf4d914d87" exitCode=0 Jan 21 16:26:51 crc kubenswrapper[4902]: I0121 16:26:51.512462 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerDied","Data":"32cdb44674e6374f766b65eaed6a61b60758360dd1e8e594ab7a3baf4d914d87"} Jan 21 16:26:51 crc kubenswrapper[4902]: I0121 16:26:51.512584 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerStarted","Data":"26557bb8550a70ceb69c02f48276580205f7ed70b0b2b78a7ba9c236ae6b41de"} Jan 21 16:26:51 crc kubenswrapper[4902]: I0121 16:26:51.514832 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:26:53 crc kubenswrapper[4902]: I0121 16:26:53.542431 4902 generic.go:334] "Generic (PLEG): container finished" podID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerID="04aced0c0b567c17119cd21528fe883b24627e7fda15f96134eacb5302158c50" exitCode=0 Jan 21 16:26:53 crc kubenswrapper[4902]: I0121 16:26:53.542480 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerDied","Data":"04aced0c0b567c17119cd21528fe883b24627e7fda15f96134eacb5302158c50"} Jan 21 16:26:54 crc kubenswrapper[4902]: I0121 16:26:54.556497 4902 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerStarted","Data":"b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b"} Jan 21 16:26:54 crc kubenswrapper[4902]: I0121 16:26:54.582526 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6smhb" podStartSLOduration=2.859371059 podStartE2EDuration="5.58249938s" podCreationTimestamp="2026-01-21 16:26:49 +0000 UTC" firstStartedPulling="2026-01-21 16:26:51.514559285 +0000 UTC m=+6773.591392314" lastFinishedPulling="2026-01-21 16:26:54.237687596 +0000 UTC m=+6776.314520635" observedRunningTime="2026-01-21 16:26:54.575765011 +0000 UTC m=+6776.652598080" watchObservedRunningTime="2026-01-21 16:26:54.58249938 +0000 UTC m=+6776.659332449" Jan 21 16:26:55 crc kubenswrapper[4902]: I0121 16:26:55.295382 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:26:55 crc kubenswrapper[4902]: E0121 16:26:55.295690 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.162379 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.163176 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.212302 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.684465 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:27:00 crc kubenswrapper[4902]: I0121 16:27:00.739359 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"] Jan 21 16:27:02 crc kubenswrapper[4902]: I0121 16:27:02.652204 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6smhb" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="registry-server" containerID="cri-o://b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b" gracePeriod=2 Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.041777 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-bjrq8"] Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.050410 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-bjrq8"] Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.665100 4902 generic.go:334] "Generic (PLEG): container finished" podID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerID="b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b" exitCode=0 Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.665167 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerDied","Data":"b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b"} Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.665382 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6smhb" event={"ID":"e1d2f8ef-0175-4070-881a-825ccd1219b8","Type":"ContainerDied","Data":"26557bb8550a70ceb69c02f48276580205f7ed70b0b2b78a7ba9c236ae6b41de"} Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.665423 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26557bb8550a70ceb69c02f48276580205f7ed70b0b2b78a7ba9c236ae6b41de" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.694290 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.810436 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") pod \"e1d2f8ef-0175-4070-881a-825ccd1219b8\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.810619 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") pod \"e1d2f8ef-0175-4070-881a-825ccd1219b8\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.810640 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") pod \"e1d2f8ef-0175-4070-881a-825ccd1219b8\" (UID: \"e1d2f8ef-0175-4070-881a-825ccd1219b8\") " Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.811554 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities" (OuterVolumeSpecName: "utilities") pod "e1d2f8ef-0175-4070-881a-825ccd1219b8" (UID: "e1d2f8ef-0175-4070-881a-825ccd1219b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.816292 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd" (OuterVolumeSpecName: "kube-api-access-fctfd") pod "e1d2f8ef-0175-4070-881a-825ccd1219b8" (UID: "e1d2f8ef-0175-4070-881a-825ccd1219b8"). InnerVolumeSpecName "kube-api-access-fctfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.833411 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1d2f8ef-0175-4070-881a-825ccd1219b8" (UID: "e1d2f8ef-0175-4070-881a-825ccd1219b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.913203 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.913587 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2f8ef-0175-4070-881a-825ccd1219b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:03 crc kubenswrapper[4902]: I0121 16:27:03.913601 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fctfd\" (UniqueName: \"kubernetes.io/projected/e1d2f8ef-0175-4070-881a-825ccd1219b8-kube-api-access-fctfd\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.035382 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.044363 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-11d1-account-create-update-c7r42"] Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.307586 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c" path="/var/lib/kubelet/pods/217952b8-c6e3-44ba-b5f2-dabc3dfa9b1c/volumes" Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.308260 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9e17b7-5e08-4042-9b1b-ccad64651eef" path="/var/lib/kubelet/pods/ff9e17b7-5e08-4042-9b1b-ccad64651eef/volumes" Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.683411 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6smhb" Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.721815 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"] Jan 21 16:27:04 crc kubenswrapper[4902]: I0121 16:27:04.736993 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6smhb"] Jan 21 16:27:06 crc kubenswrapper[4902]: I0121 16:27:06.308203 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" path="/var/lib/kubelet/pods/e1d2f8ef-0175-4070-881a-825ccd1219b8/volumes" Jan 21 16:27:07 crc kubenswrapper[4902]: I0121 16:27:07.294685 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:27:07 crc kubenswrapper[4902]: E0121 16:27:07.295258 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.714634 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:09 crc kubenswrapper[4902]: E0121 16:27:09.715738 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="registry-server" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.715770 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="registry-server" Jan 21 16:27:09 crc kubenswrapper[4902]: E0121 16:27:09.715800 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="extract-utilities" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.715808 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="extract-utilities" Jan 21 16:27:09 crc kubenswrapper[4902]: E0121 16:27:09.715829 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="extract-content" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.715836 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="extract-content" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.716144 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d2f8ef-0175-4070-881a-825ccd1219b8" containerName="registry-server" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.718266 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.731825 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.737593 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.737711 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.737946 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.840852 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.841364 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.841393 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.841521 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.841798 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:09 crc kubenswrapper[4902]: I0121 16:27:09.862245 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") pod \"certified-operators-rdqrp\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.043538 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.505338 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.756594 4902 generic.go:334] "Generic (PLEG): container finished" podID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerID="ecc4d1b7ad6d3c3e3e91d4bd9e4657053e105bd206863129b0c9caecb3844760" exitCode=0 Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.756692 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerDied","Data":"ecc4d1b7ad6d3c3e3e91d4bd9e4657053e105bd206863129b0c9caecb3844760"} Jan 21 16:27:10 crc kubenswrapper[4902]: I0121 16:27:10.757152 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerStarted","Data":"d7bcc8cece54e32e70746c7d4793c0eec0b3c6c2bff3c7a2c469217cd9ee806c"} Jan 21 16:27:12 crc kubenswrapper[4902]: I0121 16:27:12.776736 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerStarted","Data":"0b5fb853e79c68c6241f67d5b7bbcb7d13dc083797c50a00f83b6ef27ef4b827"} Jan 21 16:27:13 crc kubenswrapper[4902]: I0121 16:27:13.789212 4902 generic.go:334] "Generic (PLEG): container finished" podID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerID="0b5fb853e79c68c6241f67d5b7bbcb7d13dc083797c50a00f83b6ef27ef4b827" exitCode=0 Jan 21 16:27:13 crc kubenswrapper[4902]: I0121 16:27:13.789255 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerDied","Data":"0b5fb853e79c68c6241f67d5b7bbcb7d13dc083797c50a00f83b6ef27ef4b827"} Jan 21 16:27:15 crc kubenswrapper[4902]: I0121 16:27:15.816389 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerStarted","Data":"35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10"} Jan 21 16:27:18 crc kubenswrapper[4902]: I0121 16:27:18.044585 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rdqrp" podStartSLOduration=4.305061785 podStartE2EDuration="9.044560816s" podCreationTimestamp="2026-01-21 16:27:09 +0000 UTC" firstStartedPulling="2026-01-21 16:27:10.759142892 +0000 UTC m=+6792.835975921" lastFinishedPulling="2026-01-21 16:27:15.498641923 +0000 UTC m=+6797.575474952" observedRunningTime="2026-01-21 16:27:15.841665537 +0000 UTC m=+6797.918498576" watchObservedRunningTime="2026-01-21 16:27:18.044560816 +0000 UTC m=+6800.121393855" Jan 21 16:27:18 crc kubenswrapper[4902]: I0121 16:27:18.046277 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-db-sync-5zjtz"] Jan 21 16:27:18 crc kubenswrapper[4902]: I0121 16:27:18.057211 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-5zjtz"] Jan 21 16:27:18 crc kubenswrapper[4902]: I0121 16:27:18.306784 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a02641-de79-49cd-91a4-d689c669a38c" path="/var/lib/kubelet/pods/b1a02641-de79-49cd-91a4-d689c669a38c/volumes" Jan 21 16:27:20 crc kubenswrapper[4902]: I0121 16:27:20.044673 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:20 crc kubenswrapper[4902]: I0121 16:27:20.045142 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:20 crc kubenswrapper[4902]: I0121 16:27:20.087789 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:20 crc kubenswrapper[4902]: I0121 16:27:20.928572 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:21 crc kubenswrapper[4902]: I0121 16:27:21.039611 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:22 crc kubenswrapper[4902]: I0121 16:27:22.295089 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:27:22 crc kubenswrapper[4902]: E0121 16:27:22.295494 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:27:22 crc kubenswrapper[4902]: I0121 16:27:22.880824 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rdqrp" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="registry-server" containerID="cri-o://35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10" gracePeriod=2 Jan 21 16:27:23 crc kubenswrapper[4902]: I0121 16:27:23.891690 4902 generic.go:334] "Generic (PLEG): container finished" podID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerID="35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10" exitCode=0 Jan 21 16:27:23 crc kubenswrapper[4902]: I0121 16:27:23.891794 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerDied","Data":"35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10"} Jan 21 16:27:23 crc kubenswrapper[4902]: I0121 16:27:23.892104 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdqrp" event={"ID":"789280b2-0f33-468a-b0c8-9fe9a3843e3c","Type":"ContainerDied","Data":"d7bcc8cece54e32e70746c7d4793c0eec0b3c6c2bff3c7a2c469217cd9ee806c"} Jan 21 16:27:23 crc kubenswrapper[4902]: I0121 16:27:23.892121 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7bcc8cece54e32e70746c7d4793c0eec0b3c6c2bff3c7a2c469217cd9ee806c" Jan 21 16:27:23 crc 
kubenswrapper[4902]: I0121 16:27:23.930678 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.057313 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") pod \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.057760 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") pod \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.057927 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") pod \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\" (UID: \"789280b2-0f33-468a-b0c8-9fe9a3843e3c\") " Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.058968 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities" (OuterVolumeSpecName: "utilities") pod "789280b2-0f33-468a-b0c8-9fe9a3843e3c" (UID: "789280b2-0f33-468a-b0c8-9fe9a3843e3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.066557 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq" (OuterVolumeSpecName: "kube-api-access-7dzsq") pod "789280b2-0f33-468a-b0c8-9fe9a3843e3c" (UID: "789280b2-0f33-468a-b0c8-9fe9a3843e3c"). InnerVolumeSpecName "kube-api-access-7dzsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.110967 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "789280b2-0f33-468a-b0c8-9fe9a3843e3c" (UID: "789280b2-0f33-468a-b0c8-9fe9a3843e3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.160400 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.160440 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dzsq\" (UniqueName: \"kubernetes.io/projected/789280b2-0f33-468a-b0c8-9fe9a3843e3c-kube-api-access-7dzsq\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.160451 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789280b2-0f33-468a-b0c8-9fe9a3843e3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.899989 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdqrp" Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.923215 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:24 crc kubenswrapper[4902]: I0121 16:27:24.931429 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rdqrp"] Jan 21 16:27:26 crc kubenswrapper[4902]: I0121 16:27:26.312164 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" path="/var/lib/kubelet/pods/789280b2-0f33-468a-b0c8-9fe9a3843e3c/volumes" Jan 21 16:27:34 crc kubenswrapper[4902]: I0121 16:27:34.992187 4902 scope.go:117] "RemoveContainer" containerID="745a40ea71fa0659b994aca7c2aff73301bd6c551946f45e224b9ab71b71e18f" Jan 21 16:27:35 crc kubenswrapper[4902]: I0121 16:27:35.028291 4902 scope.go:117] "RemoveContainer" containerID="8e3ea4085f3e9419958669812fbb80d867719697fa5d6f29fd25013487806482" Jan 21 16:27:35 crc kubenswrapper[4902]: I0121 16:27:35.111400 4902 scope.go:117] "RemoveContainer" containerID="12cbd897a8c963b1753af6838fe6f74f721c8f8e6f46ac0835b5c50a96042e89" Jan 21 16:27:36 crc kubenswrapper[4902]: I0121 16:27:36.295461 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:27:36 crc kubenswrapper[4902]: E0121 16:27:36.297225 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.679368 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:27:44 crc kubenswrapper[4902]: E0121 16:27:44.680537 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="extract-content" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.680552 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="extract-content" Jan 21 16:27:44 crc kubenswrapper[4902]: E0121 16:27:44.680593 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="registry-server" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.680600 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="registry-server" Jan 21 16:27:44 crc kubenswrapper[4902]: E0121 16:27:44.680610 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="extract-utilities" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.680617 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="extract-utilities" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.680818 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="789280b2-0f33-468a-b0c8-9fe9a3843e3c" containerName="registry-server" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.682270 4902 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.694777 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.694842 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.694974 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.701169 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.795930 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.796317 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.796445 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.796896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.796976 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:44 crc kubenswrapper[4902]: I0121 16:27:44.821130 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") pod \"redhat-operators-9lrkm\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:45 crc kubenswrapper[4902]: I0121 16:27:45.012492 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:27:45 crc kubenswrapper[4902]: I0121 16:27:45.494844 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:27:46 crc kubenswrapper[4902]: I0121 16:27:46.159716 4902 generic.go:334] "Generic (PLEG): container finished" podID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerID="289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4" exitCode=0 Jan 21 16:27:46 crc kubenswrapper[4902]: I0121 16:27:46.159927 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerDied","Data":"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4"} Jan 21 16:27:46 crc kubenswrapper[4902]: I0121 16:27:46.160025 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerStarted","Data":"f2424df31c1a55e9e46726ec4a04a3834f75f121678bf943db69fe8fa9105763"} Jan 21 16:27:48 crc kubenswrapper[4902]: I0121 16:27:48.179777 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerStarted","Data":"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace"} Jan 21 16:27:49 crc kubenswrapper[4902]: I0121 16:27:49.295583 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:27:49 crc kubenswrapper[4902]: E0121 16:27:49.295841 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:27:50 crc kubenswrapper[4902]: I0121 16:27:50.587264 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xpzj8" podUID="6fc6639b-9150-4158-836f-1ffc1c4f5339" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:27:54 crc kubenswrapper[4902]: I0121 16:27:54.232914 4902 generic.go:334] "Generic (PLEG): container finished" podID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerID="35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace" exitCode=0 Jan 21 16:27:54 crc kubenswrapper[4902]: I0121 16:27:54.232993 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerDied","Data":"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace"} Jan 21 16:27:55 crc kubenswrapper[4902]: I0121 16:27:55.244147 4902 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerStarted","Data":"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9"} Jan 21 16:27:55 crc kubenswrapper[4902]: I0121 16:27:55.265217 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9lrkm" podStartSLOduration=2.620986804 podStartE2EDuration="11.265195061s" podCreationTimestamp="2026-01-21 16:27:44 +0000 UTC" firstStartedPulling="2026-01-21 16:27:46.161975235 +0000 UTC m=+6828.238808264" lastFinishedPulling="2026-01-21 16:27:54.806183492 +0000 UTC m=+6836.883016521" observedRunningTime="2026-01-21 16:27:55.264697077 +0000 UTC m=+6837.341530116" watchObservedRunningTime="2026-01-21 16:27:55.265195061 +0000 UTC m=+6837.342028100" Jan 21 16:28:03 crc kubenswrapper[4902]: I0121 16:28:03.295136 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:28:03 crc kubenswrapper[4902]: E0121 16:28:03.298067 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.012620 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.012674 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.066822 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.392443 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:05 crc kubenswrapper[4902]: I0121 16:28:05.439700 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.366636 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9lrkm" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="registry-server" containerID="cri-o://928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" gracePeriod=2 Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.824870 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.932512 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") pod \"e9b6c94a-d638-4e6d-8976-17a191b91565\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.932562 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") pod \"e9b6c94a-d638-4e6d-8976-17a191b91565\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.932859 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") pod \"e9b6c94a-d638-4e6d-8976-17a191b91565\" (UID: \"e9b6c94a-d638-4e6d-8976-17a191b91565\") " Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.933565 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities" (OuterVolumeSpecName: "utilities") pod "e9b6c94a-d638-4e6d-8976-17a191b91565" (UID: "e9b6c94a-d638-4e6d-8976-17a191b91565"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:28:07 crc kubenswrapper[4902]: I0121 16:28:07.938747 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks" (OuterVolumeSpecName: "kube-api-access-wphks") pod "e9b6c94a-d638-4e6d-8976-17a191b91565" (UID: "e9b6c94a-d638-4e6d-8976-17a191b91565"). InnerVolumeSpecName "kube-api-access-wphks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.035310 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.035351 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wphks\" (UniqueName: \"kubernetes.io/projected/e9b6c94a-d638-4e6d-8976-17a191b91565-kube-api-access-wphks\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.072664 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9b6c94a-d638-4e6d-8976-17a191b91565" (UID: "e9b6c94a-d638-4e6d-8976-17a191b91565"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.137672 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b6c94a-d638-4e6d-8976-17a191b91565-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.377581 4902 generic.go:334] "Generic (PLEG): container finished" podID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerID="928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" exitCode=0 Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.377626 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerDied","Data":"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9"} Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.377698 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9lrkm" event={"ID":"e9b6c94a-d638-4e6d-8976-17a191b91565","Type":"ContainerDied","Data":"f2424df31c1a55e9e46726ec4a04a3834f75f121678bf943db69fe8fa9105763"} Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.377728 4902 scope.go:117] "RemoveContainer" containerID="928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.378871 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9lrkm" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.419677 4902 scope.go:117] "RemoveContainer" containerID="35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.425151 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.435759 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9lrkm"] Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.466758 4902 scope.go:117] "RemoveContainer" containerID="289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.505240 4902 scope.go:117] "RemoveContainer" containerID="928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" Jan 21 16:28:08 crc kubenswrapper[4902]: E0121 16:28:08.506317 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9\": container with ID starting with 928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9 not found: ID does not exist" containerID="928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.506370 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9"} err="failed to get container status \"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9\": rpc error: code = NotFound desc = could not find container \"928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9\": container with ID starting with 928613e25da4d776b5f6505299ff5d441c92f0be4ca228297a21943fca197df9 not found: ID does not exist" Jan 21 16:28:08 crc 
kubenswrapper[4902]: I0121 16:28:08.506406 4902 scope.go:117] "RemoveContainer" containerID="35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace" Jan 21 16:28:08 crc kubenswrapper[4902]: E0121 16:28:08.507487 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace\": container with ID starting with 35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace not found: ID does not exist" containerID="35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.507563 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace"} err="failed to get container status \"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace\": rpc error: code = NotFound desc = could not find container \"35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace\": container with ID starting with 35be7633197a294376dab6ab430dea3231a76827796e3ca5c51115e75db54ace not found: ID does not exist" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.507615 4902 scope.go:117] "RemoveContainer" containerID="289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4" Jan 21 16:28:08 crc kubenswrapper[4902]: E0121 16:28:08.508302 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4\": container with ID starting with 289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4 not found: ID does not exist" containerID="289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4" Jan 21 16:28:08 crc kubenswrapper[4902]: I0121 16:28:08.508352 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4"} err="failed to get container status \"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4\": rpc error: code = NotFound desc = could not find container \"289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4\": container with ID starting with 289f8c8b76e39387321299c200e0a344656044fab965ab40645f1291c75ddcc4 not found: ID does not exist" Jan 21 16:28:10 crc kubenswrapper[4902]: I0121 16:28:10.307234 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" path="/var/lib/kubelet/pods/e9b6c94a-d638-4e6d-8976-17a191b91565/volumes" Jan 21 16:28:15 crc kubenswrapper[4902]: I0121 16:28:15.295375 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:28:15 crc kubenswrapper[4902]: E0121 16:28:15.296231 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:29 crc kubenswrapper[4902]: I0121 16:28:29.295818 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" 
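The same "RemoveContainer" / "Error syncing pod, skipping ... CrashLoopBackOff: back-off 5m0s" pair for machine-config-daemon-m2bnb repeats throughout this excerpt at roughly 10-15 second intervals. A minimal sketch for tallying those back-off records from a saved copy of this journal; this is a hypothetical helper written for illustration, not part of the captured log, and it assumes the journal text has been saved to a plain-text file whose path is passed on the command line:

#!/usr/bin/env python3
# Count kubelet CrashLoopBackOff back-off records per pod/container in a saved
# journal excerpt (hypothetical helper; file path and format are assumptions).
import re
import sys
from collections import Counter

# Matches the literal substring the kubelet emits inside the pod_workers error,
# e.g. "back-off 5m0s restarting failed container=machine-config-daemon pod=...(...".
BACKOFF_RE = re.compile(
    r'back-off (?P<delay>\S+) restarting failed '
    r'container=(?P<container>\S+) pod=(?P<pod>\S+?)\('
)

def count_backoffs(path):
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = BACKOFF_RE.search(line)
            if m:
                counts[(m.group("pod"), m.group("container"), m.group("delay"))] += 1
    return counts

if __name__ == "__main__":
    for (pod, container, delay), n in count_backoffs(sys.argv[1]).most_common():
        print(f"{n:4d}  {pod}  container={container}  back-off={delay}")

Run against this excerpt it would report the machine-config-daemon-m2bnb container as the only pod stuck in a 5m0s back-off, which is a quick way to separate that recurring noise from the one-shot marketplace catalog pod churn recorded around it.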
Jan 21 16:28:29 crc kubenswrapper[4902]: E0121 16:28:29.296632 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:40 crc kubenswrapper[4902]: I0121 16:28:40.295829 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:28:40 crc kubenswrapper[4902]: E0121 16:28:40.296757 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.851185 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:43 crc kubenswrapper[4902]: E0121 16:28:43.851996 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="registry-server" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.852012 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="registry-server" Jan 21 16:28:43 crc kubenswrapper[4902]: E0121 16:28:43.852056 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="extract-utilities" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.852068 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="extract-utilities" Jan 21 16:28:43 crc kubenswrapper[4902]: E0121 16:28:43.852113 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="extract-content" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.852123 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="extract-content" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.852418 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b6c94a-d638-4e6d-8976-17a191b91565" containerName="registry-server" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.854901 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:43 crc kubenswrapper[4902]: I0121 16:28:43.865016 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.054071 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.054271 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.054307 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.156776 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.157004 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.157068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.157457 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.157498 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.177568 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") pod \"community-operators-nddsl\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.183575 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:44 crc kubenswrapper[4902]: I0121 16:28:44.756194 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:44 crc kubenswrapper[4902]: W0121 16:28:44.768643 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod859a97e5_04f4_47a3_af07_4546c61e21fc.slice/crio-4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812 WatchSource:0}: Error finding container 4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812: Status 404 returned error can't find the container with id 4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812 Jan 21 16:28:45 crc kubenswrapper[4902]: I0121 16:28:45.778313 4902 generic.go:334] "Generic (PLEG): container finished" podID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerID="0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5" exitCode=0 Jan 21 16:28:45 crc kubenswrapper[4902]: I0121 16:28:45.778432 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerDied","Data":"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5"} Jan 21 16:28:45 crc kubenswrapper[4902]: I0121 16:28:45.778686 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerStarted","Data":"4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812"} Jan 21 16:28:47 crc kubenswrapper[4902]: I0121 16:28:47.808850 4902 generic.go:334] "Generic (PLEG): container finished" podID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerID="8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf" exitCode=0 Jan 21 16:28:47 crc kubenswrapper[4902]: I0121 16:28:47.809281 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerDied","Data":"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf"} Jan 21 16:28:48 crc kubenswrapper[4902]: I0121 16:28:48.824523 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerStarted","Data":"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5"} Jan 21 16:28:48 crc kubenswrapper[4902]: I0121 16:28:48.857213 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nddsl" podStartSLOduration=3.424128021 podStartE2EDuration="5.857186527s" podCreationTimestamp="2026-01-21 16:28:43 +0000 UTC" firstStartedPulling="2026-01-21 16:28:45.780903007 +0000 UTC m=+6887.857736036" lastFinishedPulling="2026-01-21 16:28:48.213961513 +0000 UTC m=+6890.290794542" observedRunningTime="2026-01-21 16:28:48.841972409 +0000 UTC 
m=+6890.918805438" watchObservedRunningTime="2026-01-21 16:28:48.857186527 +0000 UTC m=+6890.934019556" Jan 21 16:28:51 crc kubenswrapper[4902]: I0121 16:28:51.295520 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:28:51 crc kubenswrapper[4902]: E0121 16:28:51.296516 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:28:54 crc kubenswrapper[4902]: I0121 16:28:54.185343 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:54 crc kubenswrapper[4902]: I0121 16:28:54.185609 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:54 crc kubenswrapper[4902]: I0121 16:28:54.234465 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:54 crc kubenswrapper[4902]: I0121 16:28:54.937611 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:55 crc kubenswrapper[4902]: I0121 16:28:55.009702 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:56 crc kubenswrapper[4902]: I0121 16:28:56.899252 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nddsl" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="registry-server" containerID="cri-o://929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" gracePeriod=2 Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.731071 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.870161 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") pod \"859a97e5-04f4-47a3-af07-4546c61e21fc\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.870347 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") pod \"859a97e5-04f4-47a3-af07-4546c61e21fc\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.870542 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") pod \"859a97e5-04f4-47a3-af07-4546c61e21fc\" (UID: \"859a97e5-04f4-47a3-af07-4546c61e21fc\") " Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.871339 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities" (OuterVolumeSpecName: "utilities") pod "859a97e5-04f4-47a3-af07-4546c61e21fc" (UID: "859a97e5-04f4-47a3-af07-4546c61e21fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.877436 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl" (OuterVolumeSpecName: "kube-api-access-7pfpl") pod "859a97e5-04f4-47a3-af07-4546c61e21fc" (UID: "859a97e5-04f4-47a3-af07-4546c61e21fc"). InnerVolumeSpecName "kube-api-access-7pfpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916006 4902 generic.go:334] "Generic (PLEG): container finished" podID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerID="929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" exitCode=0 Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916097 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerDied","Data":"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5"} Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916140 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nddsl" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916151 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nddsl" event={"ID":"859a97e5-04f4-47a3-af07-4546c61e21fc","Type":"ContainerDied","Data":"4f9eb2131357be0ae15daf60bdcecde4c6eef7573e8a4b52b5979cf63b502812"} Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.916182 4902 scope.go:117] "RemoveContainer" containerID="929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.937710 4902 scope.go:117] "RemoveContainer" containerID="8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.941059 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "859a97e5-04f4-47a3-af07-4546c61e21fc" (UID: "859a97e5-04f4-47a3-af07-4546c61e21fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.963394 4902 scope.go:117] "RemoveContainer" containerID="0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.973934 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.973976 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pfpl\" (UniqueName: \"kubernetes.io/projected/859a97e5-04f4-47a3-af07-4546c61e21fc-kube-api-access-7pfpl\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:57 crc kubenswrapper[4902]: I0121 16:28:57.973992 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/859a97e5-04f4-47a3-af07-4546c61e21fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.020165 4902 scope.go:117] "RemoveContainer" containerID="929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" Jan 21 16:28:58 crc kubenswrapper[4902]: E0121 16:28:58.021743 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5\": container with ID starting with 929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5 not found: ID does not exist" containerID="929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.021801 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5"} err="failed to get container status \"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5\": rpc error: code = NotFound desc = could not find container \"929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5\": container with ID starting with 929b1f7f97b5b0714c3889d63bd6b7548150f9a1a53b54405f26147f36991af5 not found: ID does not exist" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.021832 4902 scope.go:117] "RemoveContainer" 
containerID="8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf" Jan 21 16:28:58 crc kubenswrapper[4902]: E0121 16:28:58.022341 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf\": container with ID starting with 8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf not found: ID does not exist" containerID="8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.022391 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf"} err="failed to get container status \"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf\": rpc error: code = NotFound desc = could not find container \"8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf\": container with ID starting with 8d575eeabe9275c2809f916e7d6f0ad522a19ed93ce920bc2e2264427e99dbdf not found: ID does not exist" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.022430 4902 scope.go:117] "RemoveContainer" containerID="0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5" Jan 21 16:28:58 crc kubenswrapper[4902]: E0121 16:28:58.022828 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5\": container with ID starting with 0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5 not found: ID does not exist" containerID="0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.022863 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5"} err="failed to get container status \"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5\": rpc error: code = NotFound desc = could not find container \"0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5\": container with ID starting with 0a17499b0a47b04dd96bb141fa4c802436c9e69df843badf80e1391521a7c7f5 not found: ID does not exist" Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.338563 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:28:58 crc kubenswrapper[4902]: I0121 16:28:58.343980 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nddsl"] Jan 21 16:29:00 crc kubenswrapper[4902]: I0121 16:29:00.319898 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" path="/var/lib/kubelet/pods/859a97e5-04f4-47a3-af07-4546c61e21fc/volumes" Jan 21 16:29:02 crc kubenswrapper[4902]: I0121 16:29:02.297398 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:02 crc kubenswrapper[4902]: E0121 16:29:02.298352 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:29:13 crc kubenswrapper[4902]: I0121 16:29:13.294826 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:13 crc kubenswrapper[4902]: E0121 16:29:13.296851 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:29:26 crc kubenswrapper[4902]: I0121 16:29:26.295110 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:26 crc kubenswrapper[4902]: E0121 16:29:26.295821 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:29:40 crc kubenswrapper[4902]: I0121 16:29:40.296469 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:40 crc kubenswrapper[4902]: E0121 16:29:40.297857 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:29:45 crc kubenswrapper[4902]: I0121 16:29:45.058953 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"] Jan 21 16:29:45 crc kubenswrapper[4902]: I0121 16:29:45.071760 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-fe19-account-create-update-m4ndc"] Jan 21 16:29:46 crc kubenswrapper[4902]: I0121 16:29:46.034685 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-v4xqk"] Jan 21 16:29:46 crc kubenswrapper[4902]: I0121 16:29:46.051117 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-v4xqk"] Jan 21 16:29:46 crc kubenswrapper[4902]: I0121 16:29:46.312389 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f9de683-01b0-4513-8e18-d56361ae4bc6" path="/var/lib/kubelet/pods/4f9de683-01b0-4513-8e18-d56361ae4bc6/volumes" Jan 21 16:29:46 crc kubenswrapper[4902]: I0121 16:29:46.314346 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="947c6da7-eea1-412b-8f8d-f1cdfadcf4ea" path="/var/lib/kubelet/pods/947c6da7-eea1-412b-8f8d-f1cdfadcf4ea/volumes" Jan 21 16:29:54 crc kubenswrapper[4902]: I0121 16:29:54.294910 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:29:54 crc kubenswrapper[4902]: E0121 16:29:54.295682 4902 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.182058 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 16:30:00 crc kubenswrapper[4902]: E0121 16:30:00.183093 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="registry-server" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.183109 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="registry-server" Jan 21 16:30:00 crc kubenswrapper[4902]: E0121 16:30:00.183128 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="extract-content" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.183138 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="extract-content" Jan 21 16:30:00 crc kubenswrapper[4902]: E0121 16:30:00.183163 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="extract-utilities" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.183171 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="extract-utilities" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.183526 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="859a97e5-04f4-47a3-af07-4546c61e21fc" containerName="registry-server" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.186247 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.191553 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.192208 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.192627 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.219517 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.219815 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.220118 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.321492 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.321571 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.321616 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.323538 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") pod 
\"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.328193 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.338442 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") pod \"collect-profiles-29483550-vz8jf\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:00 crc kubenswrapper[4902]: I0121 16:30:00.506869 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.014601 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 16:30:01 crc kubenswrapper[4902]: W0121 16:30:01.034865 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8598a357_73ed_4850_bbd3_ce46d3d9a623.slice/crio-4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78 WatchSource:0}: Error finding container 4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78: Status 404 returned error can't find the container with id 4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78 Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.047551 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-bvsxp"] Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.063096 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-bvsxp"] Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.540858 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" event={"ID":"8598a357-73ed-4850-bbd3-ce46d3d9a623","Type":"ContainerStarted","Data":"1150c7694232d9425d7e1595d33c3ffaecb94a439744ef680974e317c8ea6ae2"} Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.540914 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" event={"ID":"8598a357-73ed-4850-bbd3-ce46d3d9a623","Type":"ContainerStarted","Data":"4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78"} Jan 21 16:30:01 crc kubenswrapper[4902]: I0121 16:30:01.563109 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" podStartSLOduration=1.563092874 podStartE2EDuration="1.563092874s" podCreationTimestamp="2026-01-21 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:30:01.554446531 +0000 UTC m=+6963.631279560" watchObservedRunningTime="2026-01-21 16:30:01.563092874 +0000 UTC m=+6963.639925903" Jan 21 
16:30:02 crc kubenswrapper[4902]: I0121 16:30:02.308502 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad5c1ce-9471-430a-b273-873699a86d57" path="/var/lib/kubelet/pods/7ad5c1ce-9471-430a-b273-873699a86d57/volumes" Jan 21 16:30:02 crc kubenswrapper[4902]: I0121 16:30:02.551586 4902 generic.go:334] "Generic (PLEG): container finished" podID="8598a357-73ed-4850-bbd3-ce46d3d9a623" containerID="1150c7694232d9425d7e1595d33c3ffaecb94a439744ef680974e317c8ea6ae2" exitCode=0 Jan 21 16:30:02 crc kubenswrapper[4902]: I0121 16:30:02.551648 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" event={"ID":"8598a357-73ed-4850-bbd3-ce46d3d9a623","Type":"ContainerDied","Data":"1150c7694232d9425d7e1595d33c3ffaecb94a439744ef680974e317c8ea6ae2"} Jan 21 16:30:03 crc kubenswrapper[4902]: I0121 16:30:03.956453 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.012330 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") pod \"8598a357-73ed-4850-bbd3-ce46d3d9a623\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.012603 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") pod \"8598a357-73ed-4850-bbd3-ce46d3d9a623\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.012659 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") pod \"8598a357-73ed-4850-bbd3-ce46d3d9a623\" (UID: \"8598a357-73ed-4850-bbd3-ce46d3d9a623\") " Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.013222 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume" (OuterVolumeSpecName: "config-volume") pod "8598a357-73ed-4850-bbd3-ce46d3d9a623" (UID: "8598a357-73ed-4850-bbd3-ce46d3d9a623"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.013501 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8598a357-73ed-4850-bbd3-ce46d3d9a623-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.018562 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8598a357-73ed-4850-bbd3-ce46d3d9a623" (UID: "8598a357-73ed-4850-bbd3-ce46d3d9a623"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.021299 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx" (OuterVolumeSpecName: "kube-api-access-bbfsx") pod "8598a357-73ed-4850-bbd3-ce46d3d9a623" (UID: "8598a357-73ed-4850-bbd3-ce46d3d9a623"). InnerVolumeSpecName "kube-api-access-bbfsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.114717 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbfsx\" (UniqueName: \"kubernetes.io/projected/8598a357-73ed-4850-bbd3-ce46d3d9a623-kube-api-access-bbfsx\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.114758 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8598a357-73ed-4850-bbd3-ce46d3d9a623-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.571804 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" event={"ID":"8598a357-73ed-4850-bbd3-ce46d3d9a623","Type":"ContainerDied","Data":"4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78"} Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.571841 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b18519e5ad76fb7f93e6cc068f4bac59a84ecb67cd9c9339373bf79b7174b78" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.571877 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf" Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.623764 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"] Jan 21 16:30:04 crc kubenswrapper[4902]: I0121 16:30:04.631832 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-qjs6m"] Jan 21 16:30:06 crc kubenswrapper[4902]: I0121 16:30:06.298174 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:06 crc kubenswrapper[4902]: E0121 16:30:06.298818 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:06 crc kubenswrapper[4902]: I0121 16:30:06.309223 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6893ec42-9882-4d98-9d44-ab57d7366115" path="/var/lib/kubelet/pods/6893ec42-9882-4d98-9d44-ab57d7366115/volumes" Jan 21 16:30:20 crc kubenswrapper[4902]: I0121 16:30:20.295187 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:20 crc kubenswrapper[4902]: E0121 16:30:20.295921 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:31 crc kubenswrapper[4902]: I0121 16:30:31.294572 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:31 crc kubenswrapper[4902]: E0121 16:30:31.295303 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:35 crc kubenswrapper[4902]: I0121 16:30:35.307205 4902 scope.go:117] "RemoveContainer" containerID="8ce585bfe7e263f38d6e4b6cf4cca542c267ca3f4df18725b7e9510d21180fb3" Jan 21 16:30:35 crc kubenswrapper[4902]: I0121 16:30:35.338626 4902 scope.go:117] "RemoveContainer" containerID="d06aac15e4e0103b43e5e004729564b5803ddb7e6af160a1d792ad3827466cc3" Jan 21 16:30:35 crc kubenswrapper[4902]: I0121 16:30:35.384606 4902 scope.go:117] "RemoveContainer" containerID="a17204ae8500af5c3ac489e63a42369874fd6943aaf98b293789e79f2dc7c291" Jan 21 16:30:35 crc kubenswrapper[4902]: I0121 16:30:35.461298 4902 scope.go:117] "RemoveContainer" containerID="04e8685d31a4c1b85ba91615c510f74e4584d6a0993549e22bc5847f14ee429d" Jan 21 16:30:44 crc kubenswrapper[4902]: I0121 16:30:44.295717 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:44 crc kubenswrapper[4902]: E0121 16:30:44.296719 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:30:57 crc kubenswrapper[4902]: I0121 16:30:57.295549 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:30:57 crc kubenswrapper[4902]: E0121 16:30:57.296394 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:31:08 crc kubenswrapper[4902]: I0121 16:31:08.304570 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:31:08 crc kubenswrapper[4902]: E0121 16:31:08.305489 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:31:23 crc kubenswrapper[4902]: I0121 16:31:23.295373 4902 scope.go:117] "RemoveContainer" containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:31:24 crc kubenswrapper[4902]: I0121 16:31:24.339634 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701"} Jan 21 16:32:56 crc kubenswrapper[4902]: E0121 16:32:56.769253 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18a1d8a3_fcb5_408d_88ab_97d74bad0a8f.slice/crio-conmon-b5588d16688a7ebc8d6fd23427c875175924aa3ba2e94e6335eed27cd3b25dfb.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:32:57 crc kubenswrapper[4902]: I0121 16:32:57.225477 4902 generic.go:334] "Generic (PLEG): container finished" podID="18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" containerID="b5588d16688a7ebc8d6fd23427c875175924aa3ba2e94e6335eed27cd3b25dfb" exitCode=0 Jan 21 16:32:57 crc kubenswrapper[4902]: I0121 16:32:57.225518 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" event={"ID":"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f","Type":"ContainerDied","Data":"b5588d16688a7ebc8d6fd23427c875175924aa3ba2e94e6335eed27cd3b25dfb"} Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.731620 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.881536 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") pod \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.881610 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") pod \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.881847 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") pod \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.881928 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") pod \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\" (UID: \"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f\") " Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.888287 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd" (OuterVolumeSpecName: "kube-api-access-28wbd") pod "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" (UID: "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f"). InnerVolumeSpecName "kube-api-access-28wbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.892262 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" (UID: "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.916834 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory" (OuterVolumeSpecName: "inventory") pod "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" (UID: "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.922185 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" (UID: "18a1d8a3-fcb5-408d-88ab-97d74bad0a8f"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.984956 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.984991 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28wbd\" (UniqueName: \"kubernetes.io/projected/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-kube-api-access-28wbd\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.985003 4902 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:58 crc kubenswrapper[4902]: I0121 16:32:58.985015 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/18a1d8a3-fcb5-408d-88ab-97d74bad0a8f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:59 crc kubenswrapper[4902]: I0121 16:32:59.259739 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" event={"ID":"18a1d8a3-fcb5-408d-88ab-97d74bad0a8f","Type":"ContainerDied","Data":"d937f1a62ac88d359e95c410ee456b4680107ca512a37ba97d0e11eaf1bd08e7"} Jan 21 16:32:59 crc kubenswrapper[4902]: I0121 16:32:59.259795 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d937f1a62ac88d359e95c410ee456b4680107ca512a37ba97d0e11eaf1bd08e7" Jan 21 16:32:59 crc kubenswrapper[4902]: I0121 16:32:59.259799 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.113242 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-zwtbg"] Jan 21 16:33:06 crc kubenswrapper[4902]: E0121 16:33:06.114376 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.114396 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 21 16:33:06 crc kubenswrapper[4902]: E0121 16:33:06.114497 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8598a357-73ed-4850-bbd3-ce46d3d9a623" containerName="collect-profiles" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.114508 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8598a357-73ed-4850-bbd3-ce46d3d9a623" containerName="collect-profiles" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.114792 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a1d8a3-fcb5-408d-88ab-97d74bad0a8f" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.114838 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8598a357-73ed-4850-bbd3-ce46d3d9a623" containerName="collect-profiles" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.115923 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.118789 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.119020 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.119277 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.120099 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.130588 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-zwtbg"] Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.156806 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.156874 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.156930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.157001 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.258733 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.258815 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " 
pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.258865 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.258914 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.264415 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.264559 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.265573 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.276894 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") pod \"bootstrap-openstack-openstack-cell1-zwtbg\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.436668 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.995553 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-zwtbg"] Jan 21 16:33:06 crc kubenswrapper[4902]: I0121 16:33:06.996343 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:33:07 crc kubenswrapper[4902]: I0121 16:33:07.338774 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" event={"ID":"03ebbaac-5961-4e6e-8709-93bb85975c9c","Type":"ContainerStarted","Data":"09aa8b83385d63d462672b53c56cdfbd5ebc8b48d5b861d719dd5d15fd038fc7"} Jan 21 16:33:08 crc kubenswrapper[4902]: I0121 16:33:08.350622 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" event={"ID":"03ebbaac-5961-4e6e-8709-93bb85975c9c","Type":"ContainerStarted","Data":"8ba6b111039dcfe25a533eabe26035e6c80ba704480ef20d3bc95434f920bf57"} Jan 21 16:33:08 crc kubenswrapper[4902]: I0121 16:33:08.379034 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" podStartSLOduration=1.348833035 podStartE2EDuration="2.379005334s" podCreationTimestamp="2026-01-21 16:33:06 +0000 UTC" firstStartedPulling="2026-01-21 16:33:06.996090584 +0000 UTC m=+7149.072923613" lastFinishedPulling="2026-01-21 16:33:08.026262883 +0000 UTC m=+7150.103095912" observedRunningTime="2026-01-21 16:33:08.368554078 +0000 UTC m=+7150.445387117" watchObservedRunningTime="2026-01-21 16:33:08.379005334 +0000 UTC m=+7150.455838363" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.621245 4902 scope.go:117] "RemoveContainer" containerID="b373e6919ca764e66afc03ed68fe6af0501058a2ad9ef7fa08c0b3af4ce3215b" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.664212 4902 scope.go:117] "RemoveContainer" containerID="0b5fb853e79c68c6241f67d5b7bbcb7d13dc083797c50a00f83b6ef27ef4b827" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.685467 4902 scope.go:117] "RemoveContainer" containerID="35f73d651eeaa6573d9033ccbf674b8ce47b749239de3eb8f9420a462171ab10" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.732097 4902 scope.go:117] "RemoveContainer" containerID="32cdb44674e6374f766b65eaed6a61b60758360dd1e8e594ab7a3baf4d914d87" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.753443 4902 scope.go:117] "RemoveContainer" containerID="ecc4d1b7ad6d3c3e3e91d4bd9e4657053e105bd206863129b0c9caecb3844760" Jan 21 16:33:35 crc kubenswrapper[4902]: I0121 16:33:35.810164 4902 scope.go:117] "RemoveContainer" containerID="04aced0c0b567c17119cd21528fe883b24627e7fda15f96134eacb5302158c50" Jan 21 16:33:47 crc kubenswrapper[4902]: I0121 16:33:47.769521 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:33:47 crc kubenswrapper[4902]: I0121 16:33:47.770138 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 
16:34:17 crc kubenswrapper[4902]: I0121 16:34:17.769696 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:34:17 crc kubenswrapper[4902]: I0121 16:34:17.772541 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.770475 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.771188 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.771242 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.772219 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:34:47 crc kubenswrapper[4902]: I0121 16:34:47.772291 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701" gracePeriod=600 Jan 21 16:34:48 crc kubenswrapper[4902]: I0121 16:34:48.295725 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701" exitCode=0 Jan 21 16:34:48 crc kubenswrapper[4902]: I0121 16:34:48.306841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701"} Jan 21 16:34:48 crc kubenswrapper[4902]: I0121 16:34:48.306895 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"} Jan 21 16:34:48 crc kubenswrapper[4902]: I0121 16:34:48.306918 4902 scope.go:117] "RemoveContainer" 
containerID="608539e4f99cb1709fb4390c0c8b805ebe997377b901c7d95f3aae08deaeffd8" Jan 21 16:36:21 crc kubenswrapper[4902]: I0121 16:36:21.213712 4902 generic.go:334] "Generic (PLEG): container finished" podID="03ebbaac-5961-4e6e-8709-93bb85975c9c" containerID="8ba6b111039dcfe25a533eabe26035e6c80ba704480ef20d3bc95434f920bf57" exitCode=0 Jan 21 16:36:21 crc kubenswrapper[4902]: I0121 16:36:21.213829 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" event={"ID":"03ebbaac-5961-4e6e-8709-93bb85975c9c","Type":"ContainerDied","Data":"8ba6b111039dcfe25a533eabe26035e6c80ba704480ef20d3bc95434f920bf57"} Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.669505 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.840787 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") pod \"03ebbaac-5961-4e6e-8709-93bb85975c9c\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.840861 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") pod \"03ebbaac-5961-4e6e-8709-93bb85975c9c\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.840941 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") pod \"03ebbaac-5961-4e6e-8709-93bb85975c9c\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.840972 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") pod \"03ebbaac-5961-4e6e-8709-93bb85975c9c\" (UID: \"03ebbaac-5961-4e6e-8709-93bb85975c9c\") " Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.847319 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "03ebbaac-5961-4e6e-8709-93bb85975c9c" (UID: "03ebbaac-5961-4e6e-8709-93bb85975c9c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.847383 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj" (OuterVolumeSpecName: "kube-api-access-fvswj") pod "03ebbaac-5961-4e6e-8709-93bb85975c9c" (UID: "03ebbaac-5961-4e6e-8709-93bb85975c9c"). InnerVolumeSpecName "kube-api-access-fvswj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.877780 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "03ebbaac-5961-4e6e-8709-93bb85975c9c" (UID: "03ebbaac-5961-4e6e-8709-93bb85975c9c"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.878281 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory" (OuterVolumeSpecName: "inventory") pod "03ebbaac-5961-4e6e-8709-93bb85975c9c" (UID: "03ebbaac-5961-4e6e-8709-93bb85975c9c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.943927 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvswj\" (UniqueName: \"kubernetes.io/projected/03ebbaac-5961-4e6e-8709-93bb85975c9c-kube-api-access-fvswj\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.943962 4902 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.943971 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:22 crc kubenswrapper[4902]: I0121 16:36:22.943983 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ebbaac-5961-4e6e-8709-93bb85975c9c-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.240313 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" event={"ID":"03ebbaac-5961-4e6e-8709-93bb85975c9c","Type":"ContainerDied","Data":"09aa8b83385d63d462672b53c56cdfbd5ebc8b48d5b861d719dd5d15fd038fc7"} Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.240366 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09aa8b83385d63d462672b53c56cdfbd5ebc8b48d5b861d719dd5d15fd038fc7" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.240483 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-zwtbg" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.337232 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-lvw72"] Jan 21 16:36:23 crc kubenswrapper[4902]: E0121 16:36:23.338105 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ebbaac-5961-4e6e-8709-93bb85975c9c" containerName="bootstrap-openstack-openstack-cell1" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.338127 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ebbaac-5961-4e6e-8709-93bb85975c9c" containerName="bootstrap-openstack-openstack-cell1" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.338401 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ebbaac-5961-4e6e-8709-93bb85975c9c" containerName="bootstrap-openstack-openstack-cell1" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.339273 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.343700 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.344475 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.344994 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.346701 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.355982 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-lvw72"] Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.489269 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ns4\" (UniqueName: \"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.489378 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.489515 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.591548 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6ns4\" (UniqueName: 
\"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.591789 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.592004 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.599701 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.604515 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.609952 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6ns4\" (UniqueName: \"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") pod \"download-cache-openstack-openstack-cell1-lvw72\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:23 crc kubenswrapper[4902]: I0121 16:36:23.660563 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:36:24 crc kubenswrapper[4902]: I0121 16:36:24.242156 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-lvw72"] Jan 21 16:36:25 crc kubenswrapper[4902]: I0121 16:36:25.383203 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" event={"ID":"d171dc59-1575-4895-b80f-0886e901b704","Type":"ContainerStarted","Data":"7b3fb5c8e9391f6b9622a0cf5505767f0407f225f074edaa215ecff368c0b7eb"} Jan 21 16:36:25 crc kubenswrapper[4902]: I0121 16:36:25.383627 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" event={"ID":"d171dc59-1575-4895-b80f-0886e901b704","Type":"ContainerStarted","Data":"48dbb757376e3a511ff5530e99c8c98362a3ac0c8c52ba32a7a1bbd83c254216"} Jan 21 16:36:25 crc kubenswrapper[4902]: I0121 16:36:25.403835 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" podStartSLOduration=1.833955939 podStartE2EDuration="2.403818873s" podCreationTimestamp="2026-01-21 16:36:23 +0000 UTC" firstStartedPulling="2026-01-21 16:36:24.244301934 +0000 UTC m=+7346.321134963" lastFinishedPulling="2026-01-21 16:36:24.814164858 +0000 UTC m=+7346.890997897" observedRunningTime="2026-01-21 16:36:25.401185018 +0000 UTC m=+7347.478018047" watchObservedRunningTime="2026-01-21 16:36:25.403818873 +0000 UTC m=+7347.480651902" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.489424 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.492746 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.500101 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.638039 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.638174 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.638263 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740230 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740361 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740876 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.740912 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.761915 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") pod \"redhat-marketplace-d6z22\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:57 crc kubenswrapper[4902]: I0121 16:36:57.829633 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:36:58 crc kubenswrapper[4902]: I0121 16:36:58.326399 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:36:58 crc kubenswrapper[4902]: I0121 16:36:58.747234 4902 generic.go:334] "Generic (PLEG): container finished" podID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerID="04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee" exitCode=0 Jan 21 16:36:58 crc kubenswrapper[4902]: I0121 16:36:58.747359 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerDied","Data":"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee"} Jan 21 16:36:58 crc kubenswrapper[4902]: I0121 16:36:58.747583 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerStarted","Data":"10ec2ea5668a87f8e5065a6bf22b0becd71a72a8a05e59574aa952a1a8d3d6b1"} Jan 21 16:37:00 crc kubenswrapper[4902]: I0121 16:37:00.768424 4902 generic.go:334] "Generic (PLEG): container finished" podID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerID="b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae" exitCode=0 Jan 21 16:37:00 crc kubenswrapper[4902]: I0121 16:37:00.768485 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerDied","Data":"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae"} Jan 21 16:37:01 crc kubenswrapper[4902]: I0121 16:37:01.781533 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerStarted","Data":"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144"} Jan 21 16:37:01 crc kubenswrapper[4902]: I0121 16:37:01.820576 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d6z22" podStartSLOduration=2.313476762 podStartE2EDuration="4.820555932s" podCreationTimestamp="2026-01-21 16:36:57 +0000 UTC" firstStartedPulling="2026-01-21 16:36:58.749930095 +0000 UTC m=+7380.826763124" lastFinishedPulling="2026-01-21 16:37:01.257009255 +0000 UTC m=+7383.333842294" observedRunningTime="2026-01-21 16:37:01.812069181 +0000 UTC m=+7383.888902230" watchObservedRunningTime="2026-01-21 16:37:01.820555932 +0000 UTC m=+7383.897388961" Jan 21 16:37:07 crc kubenswrapper[4902]: I0121 16:37:07.830481 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:07 crc kubenswrapper[4902]: I0121 16:37:07.831016 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:07 crc kubenswrapper[4902]: I0121 16:37:07.880735 4902 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:07 crc kubenswrapper[4902]: I0121 16:37:07.942648 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:08 crc kubenswrapper[4902]: I0121 16:37:08.126485 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:37:09 crc kubenswrapper[4902]: I0121 16:37:09.873319 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d6z22" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="registry-server" containerID="cri-o://18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" gracePeriod=2 Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.336200 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.471961 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") pod \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.472494 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") pod \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.472589 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") pod \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\" (UID: \"bd62113d-9826-4317-8ad0-b2f1d06c81c0\") " Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.475057 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities" (OuterVolumeSpecName: "utilities") pod "bd62113d-9826-4317-8ad0-b2f1d06c81c0" (UID: "bd62113d-9826-4317-8ad0-b2f1d06c81c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.478770 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff" (OuterVolumeSpecName: "kube-api-access-l2sff") pod "bd62113d-9826-4317-8ad0-b2f1d06c81c0" (UID: "bd62113d-9826-4317-8ad0-b2f1d06c81c0"). InnerVolumeSpecName "kube-api-access-l2sff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.499256 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd62113d-9826-4317-8ad0-b2f1d06c81c0" (UID: "bd62113d-9826-4317-8ad0-b2f1d06c81c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.575763 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.575829 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2sff\" (UniqueName: \"kubernetes.io/projected/bd62113d-9826-4317-8ad0-b2f1d06c81c0-kube-api-access-l2sff\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.575843 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd62113d-9826-4317-8ad0-b2f1d06c81c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.896966 4902 generic.go:334] "Generic (PLEG): container finished" podID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerID="18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" exitCode=0 Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.897015 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerDied","Data":"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144"} Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.897115 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6z22" event={"ID":"bd62113d-9826-4317-8ad0-b2f1d06c81c0","Type":"ContainerDied","Data":"10ec2ea5668a87f8e5065a6bf22b0becd71a72a8a05e59574aa952a1a8d3d6b1"} Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.897138 4902 scope.go:117] "RemoveContainer" containerID="18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.897177 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6z22" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.919677 4902 scope.go:117] "RemoveContainer" containerID="b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.940112 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.951073 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6z22"] Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.956821 4902 scope.go:117] "RemoveContainer" containerID="04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.987297 4902 scope.go:117] "RemoveContainer" containerID="18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" Jan 21 16:37:10 crc kubenswrapper[4902]: E0121 16:37:10.987873 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144\": container with ID starting with 18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144 not found: ID does not exist" containerID="18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.987904 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144"} err="failed to get container status \"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144\": rpc error: code = NotFound desc = could not find container \"18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144\": container with ID starting with 18107e904f508c5ee09b19121cb03eba0b4a1099967641fa739ca8a7c00f5144 not found: ID does not exist" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.987925 4902 scope.go:117] "RemoveContainer" containerID="b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae" Jan 21 16:37:10 crc kubenswrapper[4902]: E0121 16:37:10.988294 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae\": container with ID starting with b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae not found: ID does not exist" containerID="b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.988336 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae"} err="failed to get container status \"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae\": rpc error: code = NotFound desc = could not find container \"b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae\": container with ID starting with b84ffdd8dc29fc15798528239989ebeecfbcdf66fdbe010632913ceb584678ae not found: ID does not exist" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.988364 4902 scope.go:117] "RemoveContainer" containerID="04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee" Jan 21 16:37:10 crc kubenswrapper[4902]: E0121 16:37:10.988788 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee\": container with ID starting with 04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee not found: ID does not exist" containerID="04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee" Jan 21 16:37:10 crc kubenswrapper[4902]: I0121 16:37:10.988863 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee"} err="failed to get container status \"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee\": rpc error: code = NotFound desc = could not find container \"04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee\": container with ID starting with 04c9dfbda7546710be97e89fde0e4f2e24be0ef900c3e5eeabcf76b37543d0ee not found: ID does not exist" Jan 21 16:37:12 crc kubenswrapper[4902]: I0121 16:37:12.315301 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" path="/var/lib/kubelet/pods/bd62113d-9826-4317-8ad0-b2f1d06c81c0/volumes" Jan 21 16:37:17 crc kubenswrapper[4902]: I0121 16:37:17.769891 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:37:17 crc kubenswrapper[4902]: I0121 16:37:17.775259 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:37:47 crc kubenswrapper[4902]: I0121 16:37:47.771869 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:37:47 crc kubenswrapper[4902]: I0121 16:37:47.772472 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:37:57 crc kubenswrapper[4902]: I0121 16:37:57.321378 4902 generic.go:334] "Generic (PLEG): container finished" podID="d171dc59-1575-4895-b80f-0886e901b704" containerID="7b3fb5c8e9391f6b9622a0cf5505767f0407f225f074edaa215ecff368c0b7eb" exitCode=0 Jan 21 16:37:57 crc kubenswrapper[4902]: I0121 16:37:57.321561 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" event={"ID":"d171dc59-1575-4895-b80f-0886e901b704","Type":"ContainerDied","Data":"7b3fb5c8e9391f6b9622a0cf5505767f0407f225f074edaa215ecff368c0b7eb"} Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.777262 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.816093 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") pod \"d171dc59-1575-4895-b80f-0886e901b704\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.816207 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") pod \"d171dc59-1575-4895-b80f-0886e901b704\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.816287 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6ns4\" (UniqueName: \"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") pod \"d171dc59-1575-4895-b80f-0886e901b704\" (UID: \"d171dc59-1575-4895-b80f-0886e901b704\") " Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.824559 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4" (OuterVolumeSpecName: "kube-api-access-p6ns4") pod "d171dc59-1575-4895-b80f-0886e901b704" (UID: "d171dc59-1575-4895-b80f-0886e901b704"). InnerVolumeSpecName "kube-api-access-p6ns4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.846858 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory" (OuterVolumeSpecName: "inventory") pod "d171dc59-1575-4895-b80f-0886e901b704" (UID: "d171dc59-1575-4895-b80f-0886e901b704"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.847069 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d171dc59-1575-4895-b80f-0886e901b704" (UID: "d171dc59-1575-4895-b80f-0886e901b704"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.918323 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.918356 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d171dc59-1575-4895-b80f-0886e901b704-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:58 crc kubenswrapper[4902]: I0121 16:37:58.918365 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6ns4\" (UniqueName: \"kubernetes.io/projected/d171dc59-1575-4895-b80f-0886e901b704-kube-api-access-p6ns4\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.344488 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" event={"ID":"d171dc59-1575-4895-b80f-0886e901b704","Type":"ContainerDied","Data":"48dbb757376e3a511ff5530e99c8c98362a3ac0c8c52ba32a7a1bbd83c254216"} Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.344533 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48dbb757376e3a511ff5530e99c8c98362a3ac0c8c52ba32a7a1bbd83c254216" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.344594 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-lvw72" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.436616 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-jgd86"] Jan 21 16:37:59 crc kubenswrapper[4902]: E0121 16:37:59.437010 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="registry-server" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437026 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="registry-server" Jan 21 16:37:59 crc kubenswrapper[4902]: E0121 16:37:59.437037 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d171dc59-1575-4895-b80f-0886e901b704" containerName="download-cache-openstack-openstack-cell1" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437055 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="d171dc59-1575-4895-b80f-0886e901b704" containerName="download-cache-openstack-openstack-cell1" Jan 21 16:37:59 crc kubenswrapper[4902]: E0121 16:37:59.437075 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="extract-utilities" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437082 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="extract-utilities" Jan 21 16:37:59 crc kubenswrapper[4902]: E0121 16:37:59.437104 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="extract-content" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437109 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="extract-content" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437290 4902 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bd62113d-9826-4317-8ad0-b2f1d06c81c0" containerName="registry-server" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.437342 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="d171dc59-1575-4895-b80f-0886e901b704" containerName="download-cache-openstack-openstack-cell1" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.438035 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.441778 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.441780 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.441844 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.441795 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.448325 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-jgd86"] Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.532772 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.532878 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.532945 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.636280 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.636449 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: 
\"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.636490 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.647756 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.650642 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.673820 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") pod \"configure-network-openstack-openstack-cell1-jgd86\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:37:59 crc kubenswrapper[4902]: I0121 16:37:59.756783 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:38:00 crc kubenswrapper[4902]: I0121 16:38:00.356775 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-jgd86"] Jan 21 16:38:01 crc kubenswrapper[4902]: I0121 16:38:01.365021 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" event={"ID":"2418bfc5-bf9b-4397-bc7f-20aa86aa582a","Type":"ContainerStarted","Data":"69f60bb136372fb2378f342b345859b169e746d2f6f9374d0fb348efe83cb1b2"} Jan 21 16:38:01 crc kubenswrapper[4902]: I0121 16:38:01.365364 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" event={"ID":"2418bfc5-bf9b-4397-bc7f-20aa86aa582a","Type":"ContainerStarted","Data":"c302a71944841262a867537b73d171a54730ef06ab00ad3abc0cf5946248e3eb"} Jan 21 16:38:01 crc kubenswrapper[4902]: I0121 16:38:01.391616 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" podStartSLOduration=1.952529207 podStartE2EDuration="2.391594201s" podCreationTimestamp="2026-01-21 16:37:59 +0000 UTC" firstStartedPulling="2026-01-21 16:38:00.36736182 +0000 UTC m=+7442.444194849" lastFinishedPulling="2026-01-21 16:38:00.806426794 +0000 UTC m=+7442.883259843" observedRunningTime="2026-01-21 16:38:01.387249568 +0000 UTC m=+7443.464082607" watchObservedRunningTime="2026-01-21 16:38:01.391594201 +0000 UTC m=+7443.468427240" Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.769935 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.770472 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.770521 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.771446 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:38:17 crc kubenswrapper[4902]: I0121 16:38:17.771511 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" gracePeriod=600 Jan 21 16:38:17 crc kubenswrapper[4902]: E0121 16:38:17.896349 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:38:18 crc kubenswrapper[4902]: I0121 16:38:18.527345 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" exitCode=0 Jan 21 16:38:18 crc kubenswrapper[4902]: I0121 16:38:18.527782 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d"} Jan 21 16:38:18 crc kubenswrapper[4902]: I0121 16:38:18.527847 4902 scope.go:117] "RemoveContainer" containerID="8b5417485127dae8a96d83b6c2fcfd1f6e929b87a550a052576012cdd78d3701" Jan 21 16:38:18 crc kubenswrapper[4902]: I0121 16:38:18.528737 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:38:18 crc kubenswrapper[4902]: E0121 16:38:18.529121 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:38:33 crc kubenswrapper[4902]: I0121 16:38:33.295798 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:38:33 crc kubenswrapper[4902]: E0121 16:38:33.296557 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:38:46 crc kubenswrapper[4902]: I0121 16:38:46.296254 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:38:46 crc kubenswrapper[4902]: E0121 16:38:46.297722 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:38:58 crc kubenswrapper[4902]: I0121 16:38:58.303299 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:38:58 crc kubenswrapper[4902]: E0121 16:38:58.304153 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:39:13 crc kubenswrapper[4902]: I0121 16:39:13.295034 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:39:13 crc kubenswrapper[4902]: E0121 16:39:13.296166 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:39:23 crc kubenswrapper[4902]: I0121 16:39:23.180426 4902 generic.go:334] "Generic (PLEG): container finished" podID="2418bfc5-bf9b-4397-bc7f-20aa86aa582a" containerID="69f60bb136372fb2378f342b345859b169e746d2f6f9374d0fb348efe83cb1b2" exitCode=0 Jan 21 16:39:23 crc kubenswrapper[4902]: I0121 16:39:23.180519 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" event={"ID":"2418bfc5-bf9b-4397-bc7f-20aa86aa582a","Type":"ContainerDied","Data":"69f60bb136372fb2378f342b345859b169e746d2f6f9374d0fb348efe83cb1b2"} Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.796134 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.981980 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") pod \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.982449 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") pod \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.982600 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") pod \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\" (UID: \"2418bfc5-bf9b-4397-bc7f-20aa86aa582a\") " Jan 21 16:39:24 crc kubenswrapper[4902]: I0121 16:39:24.990945 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq" (OuterVolumeSpecName: "kube-api-access-jpjvq") pod "2418bfc5-bf9b-4397-bc7f-20aa86aa582a" (UID: "2418bfc5-bf9b-4397-bc7f-20aa86aa582a"). InnerVolumeSpecName "kube-api-access-jpjvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.012700 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory" (OuterVolumeSpecName: "inventory") pod "2418bfc5-bf9b-4397-bc7f-20aa86aa582a" (UID: "2418bfc5-bf9b-4397-bc7f-20aa86aa582a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.016491 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2418bfc5-bf9b-4397-bc7f-20aa86aa582a" (UID: "2418bfc5-bf9b-4397-bc7f-20aa86aa582a"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.085335 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpjvq\" (UniqueName: \"kubernetes.io/projected/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-kube-api-access-jpjvq\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.087224 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.087379 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2418bfc5-bf9b-4397-bc7f-20aa86aa582a-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.382730 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" event={"ID":"2418bfc5-bf9b-4397-bc7f-20aa86aa582a","Type":"ContainerDied","Data":"c302a71944841262a867537b73d171a54730ef06ab00ad3abc0cf5946248e3eb"} Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.382776 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c302a71944841262a867537b73d171a54730ef06ab00ad3abc0cf5946248e3eb" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.382828 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jgd86" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.397494 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5c9t8"] Jan 21 16:39:25 crc kubenswrapper[4902]: E0121 16:39:25.400361 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2418bfc5-bf9b-4397-bc7f-20aa86aa582a" containerName="configure-network-openstack-openstack-cell1" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.400384 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2418bfc5-bf9b-4397-bc7f-20aa86aa582a" containerName="configure-network-openstack-openstack-cell1" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.401985 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2418bfc5-bf9b-4397-bc7f-20aa86aa582a" containerName="configure-network-openstack-openstack-cell1" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.430914 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5c9t8"] Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.431102 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.437717 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.438243 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.438265 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.438417 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.510292 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.510538 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.510873 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.612523 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.612589 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.612714 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.618305 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.626623 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.631891 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") pod \"validate-network-openstack-openstack-cell1-5c9t8\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:25 crc kubenswrapper[4902]: I0121 16:39:25.760973 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:26 crc kubenswrapper[4902]: W0121 16:39:26.323270 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffce6892_25f4_48d1_b314_24d784fbc43f.slice/crio-20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e WatchSource:0}: Error finding container 20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e: Status 404 returned error can't find the container with id 20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e Jan 21 16:39:26 crc kubenswrapper[4902]: I0121 16:39:26.324596 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5c9t8"] Jan 21 16:39:26 crc kubenswrapper[4902]: I0121 16:39:26.327018 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:39:26 crc kubenswrapper[4902]: I0121 16:39:26.395512 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" event={"ID":"ffce6892-25f4-48d1-b314-24d784fbc43f","Type":"ContainerStarted","Data":"20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e"} Jan 21 16:39:27 crc kubenswrapper[4902]: I0121 16:39:27.413667 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" event={"ID":"ffce6892-25f4-48d1-b314-24d784fbc43f","Type":"ContainerStarted","Data":"2debda954ce460ba3ebdc1bc42e8959780a79f976e8a7784022fb6fa887a3fd5"} Jan 21 16:39:28 crc kubenswrapper[4902]: I0121 16:39:28.302619 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:39:28 crc kubenswrapper[4902]: E0121 16:39:28.303199 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:39:32 crc kubenswrapper[4902]: I0121 16:39:32.468913 4902 generic.go:334] "Generic (PLEG): container finished" podID="ffce6892-25f4-48d1-b314-24d784fbc43f" containerID="2debda954ce460ba3ebdc1bc42e8959780a79f976e8a7784022fb6fa887a3fd5" exitCode=0 Jan 21 16:39:32 crc kubenswrapper[4902]: I0121 16:39:32.469031 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" event={"ID":"ffce6892-25f4-48d1-b314-24d784fbc43f","Type":"ContainerDied","Data":"2debda954ce460ba3ebdc1bc42e8959780a79f976e8a7784022fb6fa887a3fd5"} Jan 21 16:39:33 crc kubenswrapper[4902]: I0121 16:39:33.995994 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.098716 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") pod \"ffce6892-25f4-48d1-b314-24d784fbc43f\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.098798 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") pod \"ffce6892-25f4-48d1-b314-24d784fbc43f\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.098850 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") pod \"ffce6892-25f4-48d1-b314-24d784fbc43f\" (UID: \"ffce6892-25f4-48d1-b314-24d784fbc43f\") " Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.114244 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx" (OuterVolumeSpecName: "kube-api-access-vghhx") pod "ffce6892-25f4-48d1-b314-24d784fbc43f" (UID: "ffce6892-25f4-48d1-b314-24d784fbc43f"). InnerVolumeSpecName "kube-api-access-vghhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.133771 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory" (OuterVolumeSpecName: "inventory") pod "ffce6892-25f4-48d1-b314-24d784fbc43f" (UID: "ffce6892-25f4-48d1-b314-24d784fbc43f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.140411 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ffce6892-25f4-48d1-b314-24d784fbc43f" (UID: "ffce6892-25f4-48d1-b314-24d784fbc43f"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.201974 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.202017 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ffce6892-25f4-48d1-b314-24d784fbc43f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.202032 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vghhx\" (UniqueName: \"kubernetes.io/projected/ffce6892-25f4-48d1-b314-24d784fbc43f-kube-api-access-vghhx\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.499152 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" event={"ID":"ffce6892-25f4-48d1-b314-24d784fbc43f","Type":"ContainerDied","Data":"20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e"} Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.499483 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20e348370114a2e50aab14d772457f260853eab9ef63a14d38be5fe459f2ea9e" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.499548 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c9t8" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.587264 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-7xpxk"] Jan 21 16:39:34 crc kubenswrapper[4902]: E0121 16:39:34.587841 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffce6892-25f4-48d1-b314-24d784fbc43f" containerName="validate-network-openstack-openstack-cell1" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.587866 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffce6892-25f4-48d1-b314-24d784fbc43f" containerName="validate-network-openstack-openstack-cell1" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.588172 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffce6892-25f4-48d1-b314-24d784fbc43f" containerName="validate-network-openstack-openstack-cell1" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.589156 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.592971 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.593058 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.593118 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.593260 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.601114 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-7xpxk"] Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.723307 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.723368 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.723489 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.826386 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.826435 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.826462 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 
21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.830776 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.830859 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.847665 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") pod \"install-os-openstack-openstack-cell1-7xpxk\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:34 crc kubenswrapper[4902]: I0121 16:39:34.923670 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:39:35 crc kubenswrapper[4902]: I0121 16:39:35.502517 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-7xpxk"] Jan 21 16:39:36 crc kubenswrapper[4902]: I0121 16:39:36.522312 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" event={"ID":"e253be6c-dccb-456f-b4ca-0aed1b901c43","Type":"ContainerStarted","Data":"7f14e700c3c08bd2436965f63df6596d4264b1913725352a693d9211f6ae13f3"} Jan 21 16:39:36 crc kubenswrapper[4902]: I0121 16:39:36.522868 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" event={"ID":"e253be6c-dccb-456f-b4ca-0aed1b901c43","Type":"ContainerStarted","Data":"d0c3082fb7a35a7b9b6397aac0ac9cface842fa578167f4301df76aea1c35137"} Jan 21 16:39:36 crc kubenswrapper[4902]: I0121 16:39:36.547491 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" podStartSLOduration=2.054464694 podStartE2EDuration="2.547473474s" podCreationTimestamp="2026-01-21 16:39:34 +0000 UTC" firstStartedPulling="2026-01-21 16:39:35.525415004 +0000 UTC m=+7537.602248043" lastFinishedPulling="2026-01-21 16:39:36.018423794 +0000 UTC m=+7538.095256823" observedRunningTime="2026-01-21 16:39:36.546379443 +0000 UTC m=+7538.623212472" watchObservedRunningTime="2026-01-21 16:39:36.547473474 +0000 UTC m=+7538.624306503" Jan 21 16:39:40 crc kubenswrapper[4902]: I0121 16:39:40.295201 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:39:40 crc kubenswrapper[4902]: E0121 16:39:40.296116 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" 
podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.731133 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.733472 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.745459 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.854256 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.854362 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.854598 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.956950 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.957035 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.957239 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.957954 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.958005 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:43 crc kubenswrapper[4902]: I0121 16:39:43.980717 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") pod \"community-operators-8lxc7\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:44 crc kubenswrapper[4902]: I0121 16:39:44.062311 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:44 crc kubenswrapper[4902]: I0121 16:39:44.664988 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:45 crc kubenswrapper[4902]: I0121 16:39:45.610901 4902 generic.go:334] "Generic (PLEG): container finished" podID="086170f4-76bd-43a5-861d-eca144befed6" containerID="b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0" exitCode=0 Jan 21 16:39:45 crc kubenswrapper[4902]: I0121 16:39:45.610944 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerDied","Data":"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0"} Jan 21 16:39:45 crc kubenswrapper[4902]: I0121 16:39:45.611262 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerStarted","Data":"5ab74e5da033ad0b6c3336ad93b57452ac02493806ddcb5b7c603ed687ba6556"} Jan 21 16:39:47 crc kubenswrapper[4902]: I0121 16:39:47.631878 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerStarted","Data":"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875"} Jan 21 16:39:49 crc kubenswrapper[4902]: I0121 16:39:49.652314 4902 generic.go:334] "Generic (PLEG): container finished" podID="086170f4-76bd-43a5-861d-eca144befed6" containerID="1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875" exitCode=0 Jan 21 16:39:49 crc kubenswrapper[4902]: I0121 16:39:49.652407 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerDied","Data":"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875"} Jan 21 16:39:50 crc kubenswrapper[4902]: I0121 16:39:50.676904 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerStarted","Data":"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804"} Jan 21 16:39:50 crc kubenswrapper[4902]: I0121 16:39:50.707958 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8lxc7" podStartSLOduration=3.244496712 podStartE2EDuration="7.707935989s" podCreationTimestamp="2026-01-21 16:39:43 +0000 UTC" firstStartedPulling="2026-01-21 16:39:45.612933891 +0000 UTC m=+7547.689766930" lastFinishedPulling="2026-01-21 
16:39:50.076373168 +0000 UTC m=+7552.153206207" observedRunningTime="2026-01-21 16:39:50.701910998 +0000 UTC m=+7552.778744037" watchObservedRunningTime="2026-01-21 16:39:50.707935989 +0000 UTC m=+7552.784769018" Jan 21 16:39:52 crc kubenswrapper[4902]: I0121 16:39:52.294864 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:39:52 crc kubenswrapper[4902]: E0121 16:39:52.295470 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:39:54 crc kubenswrapper[4902]: I0121 16:39:54.062863 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:54 crc kubenswrapper[4902]: I0121 16:39:54.064272 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:54 crc kubenswrapper[4902]: I0121 16:39:54.114942 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:55 crc kubenswrapper[4902]: I0121 16:39:55.766384 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:55 crc kubenswrapper[4902]: I0121 16:39:55.810769 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:57 crc kubenswrapper[4902]: I0121 16:39:57.735079 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8lxc7" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="registry-server" containerID="cri-o://c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" gracePeriod=2 Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.215222 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.282856 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") pod \"086170f4-76bd-43a5-861d-eca144befed6\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.282900 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") pod \"086170f4-76bd-43a5-861d-eca144befed6\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.283073 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") pod \"086170f4-76bd-43a5-861d-eca144befed6\" (UID: \"086170f4-76bd-43a5-861d-eca144befed6\") " Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.283790 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities" (OuterVolumeSpecName: "utilities") pod "086170f4-76bd-43a5-861d-eca144befed6" (UID: "086170f4-76bd-43a5-861d-eca144befed6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.290789 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6" (OuterVolumeSpecName: "kube-api-access-2cwd6") pod "086170f4-76bd-43a5-861d-eca144befed6" (UID: "086170f4-76bd-43a5-861d-eca144befed6"). InnerVolumeSpecName "kube-api-access-2cwd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.350279 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "086170f4-76bd-43a5-861d-eca144befed6" (UID: "086170f4-76bd-43a5-861d-eca144befed6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.385994 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.386273 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/086170f4-76bd-43a5-861d-eca144befed6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.386397 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cwd6\" (UniqueName: \"kubernetes.io/projected/086170f4-76bd-43a5-861d-eca144befed6-kube-api-access-2cwd6\") on node \"crc\" DevicePath \"\"" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.745669 4902 generic.go:334] "Generic (PLEG): container finished" podID="086170f4-76bd-43a5-861d-eca144befed6" containerID="c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" exitCode=0 Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.745725 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerDied","Data":"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804"} Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.745992 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lxc7" event={"ID":"086170f4-76bd-43a5-861d-eca144befed6","Type":"ContainerDied","Data":"5ab74e5da033ad0b6c3336ad93b57452ac02493806ddcb5b7c603ed687ba6556"} Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.746014 4902 scope.go:117] "RemoveContainer" containerID="c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.745743 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lxc7" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.777322 4902 scope.go:117] "RemoveContainer" containerID="1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.806492 4902 scope.go:117] "RemoveContainer" containerID="b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.815249 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.821702 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8lxc7"] Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.859320 4902 scope.go:117] "RemoveContainer" containerID="c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" Jan 21 16:39:58 crc kubenswrapper[4902]: E0121 16:39:58.859847 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804\": container with ID starting with c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804 not found: ID does not exist" containerID="c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.859889 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804"} err="failed to get container status \"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804\": rpc error: code = NotFound desc = could not find container \"c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804\": container with ID starting with c9b0e569de65a92362619b0233ad1ff921f20ef1c3e0da6bd46f4f89c65d8804 not found: ID does not exist" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.859918 4902 scope.go:117] "RemoveContainer" containerID="1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875" Jan 21 16:39:58 crc kubenswrapper[4902]: E0121 16:39:58.860320 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875\": container with ID starting with 1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875 not found: ID does not exist" containerID="1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.860366 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875"} err="failed to get container status \"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875\": rpc error: code = NotFound desc = could not find container \"1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875\": container with ID starting with 1e42bc3c139adb6e36f7a8865f03b6d6c2d57497403db07bf73d38c701302875 not found: ID does not exist" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.860400 4902 scope.go:117] "RemoveContainer" containerID="b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0" Jan 21 16:39:58 crc kubenswrapper[4902]: E0121 16:39:58.860751 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0\": container with ID starting with b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0 not found: ID does not exist" containerID="b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0" Jan 21 16:39:58 crc kubenswrapper[4902]: I0121 16:39:58.860801 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0"} err="failed to get container status \"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0\": rpc error: code = NotFound desc = could not find container \"b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0\": container with ID starting with b58be9c3a498fa9a6988df6acdb6dbd906eaa0376aaa75925597e04f83d011f0 not found: ID does not exist" Jan 21 16:40:00 crc kubenswrapper[4902]: I0121 16:40:00.312946 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086170f4-76bd-43a5-861d-eca144befed6" path="/var/lib/kubelet/pods/086170f4-76bd-43a5-861d-eca144befed6/volumes" Jan 21 16:40:05 crc kubenswrapper[4902]: I0121 16:40:05.295646 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:40:05 crc kubenswrapper[4902]: E0121 16:40:05.297009 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:40:18 crc kubenswrapper[4902]: I0121 16:40:18.306402 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:40:18 crc kubenswrapper[4902]: E0121 16:40:18.307282 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:40:23 crc kubenswrapper[4902]: I0121 16:40:23.049815 4902 generic.go:334] "Generic (PLEG): container finished" podID="e253be6c-dccb-456f-b4ca-0aed1b901c43" containerID="7f14e700c3c08bd2436965f63df6596d4264b1913725352a693d9211f6ae13f3" exitCode=0 Jan 21 16:40:23 crc kubenswrapper[4902]: I0121 16:40:23.050425 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" event={"ID":"e253be6c-dccb-456f-b4ca-0aed1b901c43","Type":"ContainerDied","Data":"7f14e700c3c08bd2436965f63df6596d4264b1913725352a693d9211f6ae13f3"} Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.598066 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.745102 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") pod \"e253be6c-dccb-456f-b4ca-0aed1b901c43\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.745724 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") pod \"e253be6c-dccb-456f-b4ca-0aed1b901c43\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.745884 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") pod \"e253be6c-dccb-456f-b4ca-0aed1b901c43\" (UID: \"e253be6c-dccb-456f-b4ca-0aed1b901c43\") " Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.753947 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld" (OuterVolumeSpecName: "kube-api-access-kd7ld") pod "e253be6c-dccb-456f-b4ca-0aed1b901c43" (UID: "e253be6c-dccb-456f-b4ca-0aed1b901c43"). InnerVolumeSpecName "kube-api-access-kd7ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.785529 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory" (OuterVolumeSpecName: "inventory") pod "e253be6c-dccb-456f-b4ca-0aed1b901c43" (UID: "e253be6c-dccb-456f-b4ca-0aed1b901c43"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.788406 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e253be6c-dccb-456f-b4ca-0aed1b901c43" (UID: "e253be6c-dccb-456f-b4ca-0aed1b901c43"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.849417 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.849454 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd7ld\" (UniqueName: \"kubernetes.io/projected/e253be6c-dccb-456f-b4ca-0aed1b901c43-kube-api-access-kd7ld\") on node \"crc\" DevicePath \"\"" Jan 21 16:40:24 crc kubenswrapper[4902]: I0121 16:40:24.849469 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e253be6c-dccb-456f-b4ca-0aed1b901c43-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.075782 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.075768 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-7xpxk" event={"ID":"e253be6c-dccb-456f-b4ca-0aed1b901c43","Type":"ContainerDied","Data":"d0c3082fb7a35a7b9b6397aac0ac9cface842fa578167f4301df76aea1c35137"} Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.075978 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0c3082fb7a35a7b9b6397aac0ac9cface842fa578167f4301df76aea1c35137" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.169433 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-w46l6"] Jan 21 16:40:25 crc kubenswrapper[4902]: E0121 16:40:25.169932 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="extract-content" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.169950 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="extract-content" Jan 21 16:40:25 crc kubenswrapper[4902]: E0121 16:40:25.169981 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="extract-utilities" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.169988 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="extract-utilities" Jan 21 16:40:25 crc kubenswrapper[4902]: E0121 16:40:25.170001 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e253be6c-dccb-456f-b4ca-0aed1b901c43" containerName="install-os-openstack-openstack-cell1" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.170018 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e253be6c-dccb-456f-b4ca-0aed1b901c43" containerName="install-os-openstack-openstack-cell1" Jan 21 16:40:25 crc kubenswrapper[4902]: E0121 16:40:25.170027 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="registry-server" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.170033 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="registry-server" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.170254 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e253be6c-dccb-456f-b4ca-0aed1b901c43" containerName="install-os-openstack-openstack-cell1" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.170270 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="086170f4-76bd-43a5-861d-eca144befed6" containerName="registry-server" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.171035 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.173000 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.173178 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.177535 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.177768 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.182655 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-w46l6"] Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.360651 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.360716 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.361487 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.462912 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.463427 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.464068 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " 
pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.467175 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.477174 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.483736 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") pod \"configure-os-openstack-openstack-cell1-w46l6\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:25 crc kubenswrapper[4902]: I0121 16:40:25.560421 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:40:26 crc kubenswrapper[4902]: I0121 16:40:26.147220 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-w46l6"] Jan 21 16:40:27 crc kubenswrapper[4902]: I0121 16:40:27.096993 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" event={"ID":"4570bbab-b55a-498c-8276-2c7aa0969540","Type":"ContainerStarted","Data":"88ff5cbcfa1e27400f64e32a1c15c4a3a56ae22145423ed31c76145fed2fd012"} Jan 21 16:40:28 crc kubenswrapper[4902]: I0121 16:40:28.107889 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" event={"ID":"4570bbab-b55a-498c-8276-2c7aa0969540","Type":"ContainerStarted","Data":"eab440740b0a21242af5c0364ac8efd26b6c03943ba49a53c8eef5d719029002"} Jan 21 16:40:28 crc kubenswrapper[4902]: I0121 16:40:28.133883 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" podStartSLOduration=2.395082393 podStartE2EDuration="3.133866298s" podCreationTimestamp="2026-01-21 16:40:25 +0000 UTC" firstStartedPulling="2026-01-21 16:40:26.149122278 +0000 UTC m=+7588.225955307" lastFinishedPulling="2026-01-21 16:40:26.887906153 +0000 UTC m=+7588.964739212" observedRunningTime="2026-01-21 16:40:28.121957431 +0000 UTC m=+7590.198790460" watchObservedRunningTime="2026-01-21 16:40:28.133866298 +0000 UTC m=+7590.210699327" Jan 21 16:40:33 crc kubenswrapper[4902]: I0121 16:40:33.294682 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:40:33 crc kubenswrapper[4902]: E0121 16:40:33.295706 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:40:47 crc kubenswrapper[4902]: I0121 16:40:47.294706 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:40:47 crc kubenswrapper[4902]: E0121 16:40:47.295906 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.211709 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.216264 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.252170 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.351364 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.351446 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.351511 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.454290 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.454427 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.454877 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.455776 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.456248 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.488904 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") pod \"redhat-operators-st4zd\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:53 crc kubenswrapper[4902]: I0121 16:40:53.566164 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.082065 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.410488 4902 generic.go:334] "Generic (PLEG): container finished" podID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerID="31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0" exitCode=0 Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.410741 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerDied","Data":"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0"} Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.410888 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerStarted","Data":"bb7ccdf2827a1f6dd75b1026bcd57af8e936b222c3cfce0567e1538bfba4bc6e"} Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.611897 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"] Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.614658 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.624302 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"] Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.695726 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.695888 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.695963 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.798389 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.798525 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.799060 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.799257 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.799579 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.838349 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") pod \"certified-operators-mz8dz\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:54 crc kubenswrapper[4902]: I0121 16:40:54.949341 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:40:55 crc kubenswrapper[4902]: I0121 16:40:55.533332 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"] Jan 21 16:40:56 crc kubenswrapper[4902]: I0121 16:40:56.433907 4902 generic.go:334] "Generic (PLEG): container finished" podID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerID="14dd86a2e3b7d5c82bcad09aa1bc2117f7eed1b5efb481174f490e8dcceec431" exitCode=0 Jan 21 16:40:56 crc kubenswrapper[4902]: I0121 16:40:56.434215 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerDied","Data":"14dd86a2e3b7d5c82bcad09aa1bc2117f7eed1b5efb481174f490e8dcceec431"} Jan 21 16:40:56 crc kubenswrapper[4902]: I0121 16:40:56.434242 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerStarted","Data":"ee8e92833fe775226f0b1a1c483d4e9507784ce3af65b37360c09279b01577fa"} Jan 21 16:40:56 crc kubenswrapper[4902]: I0121 16:40:56.438298 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerStarted","Data":"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509"} Jan 21 16:40:59 crc kubenswrapper[4902]: I0121 16:40:59.468409 4902 generic.go:334] "Generic (PLEG): container finished" podID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerID="f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509" exitCode=0 Jan 21 16:40:59 crc kubenswrapper[4902]: I0121 16:40:59.469081 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerDied","Data":"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509"} Jan 21 16:40:59 crc kubenswrapper[4902]: I0121 16:40:59.476975 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerStarted","Data":"f2a60e3589ca79881dcfa599a9ffc2679c7179019127fdf6fe4134dd2dc99dd8"} Jan 21 16:41:01 crc kubenswrapper[4902]: I0121 16:41:01.504503 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerStarted","Data":"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a"} Jan 21 16:41:01 crc kubenswrapper[4902]: I0121 16:41:01.508403 4902 generic.go:334] "Generic (PLEG): container finished" podID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerID="f2a60e3589ca79881dcfa599a9ffc2679c7179019127fdf6fe4134dd2dc99dd8" exitCode=0 Jan 21 16:41:01 crc kubenswrapper[4902]: I0121 16:41:01.508457 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" 
event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerDied","Data":"f2a60e3589ca79881dcfa599a9ffc2679c7179019127fdf6fe4134dd2dc99dd8"} Jan 21 16:41:01 crc kubenswrapper[4902]: I0121 16:41:01.533420 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-st4zd" podStartSLOduration=2.4120412399999998 podStartE2EDuration="8.5333931s" podCreationTimestamp="2026-01-21 16:40:53 +0000 UTC" firstStartedPulling="2026-01-21 16:40:54.414463042 +0000 UTC m=+7616.491296081" lastFinishedPulling="2026-01-21 16:41:00.535814912 +0000 UTC m=+7622.612647941" observedRunningTime="2026-01-21 16:41:01.52277337 +0000 UTC m=+7623.599606409" watchObservedRunningTime="2026-01-21 16:41:01.5333931 +0000 UTC m=+7623.610226169" Jan 21 16:41:02 crc kubenswrapper[4902]: I0121 16:41:02.295612 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:41:02 crc kubenswrapper[4902]: E0121 16:41:02.296666 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:41:02 crc kubenswrapper[4902]: I0121 16:41:02.524035 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerStarted","Data":"1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26"} Jan 21 16:41:02 crc kubenswrapper[4902]: I0121 16:41:02.550185 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mz8dz" podStartSLOduration=3.012806065 podStartE2EDuration="8.550168191s" podCreationTimestamp="2026-01-21 16:40:54 +0000 UTC" firstStartedPulling="2026-01-21 16:40:56.436319893 +0000 UTC m=+7618.513152932" lastFinishedPulling="2026-01-21 16:41:01.973682019 +0000 UTC m=+7624.050515058" observedRunningTime="2026-01-21 16:41:02.549228034 +0000 UTC m=+7624.626061073" watchObservedRunningTime="2026-01-21 16:41:02.550168191 +0000 UTC m=+7624.627001210" Jan 21 16:41:03 crc kubenswrapper[4902]: I0121 16:41:03.566302 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:03 crc kubenswrapper[4902]: I0121 16:41:03.566579 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:04 crc kubenswrapper[4902]: I0121 16:41:04.615870 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-st4zd" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server" probeResult="failure" output=< Jan 21 16:41:04 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 16:41:04 crc kubenswrapper[4902]: > Jan 21 16:41:04 crc kubenswrapper[4902]: I0121 16:41:04.949705 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:04 crc kubenswrapper[4902]: I0121 16:41:04.949770 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:05 crc kubenswrapper[4902]: I0121 16:41:05.026447 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:08 crc kubenswrapper[4902]: I0121 16:41:08.629788 4902 generic.go:334] "Generic (PLEG): container finished" podID="4570bbab-b55a-498c-8276-2c7aa0969540" containerID="eab440740b0a21242af5c0364ac8efd26b6c03943ba49a53c8eef5d719029002" exitCode=2 Jan 21 16:41:08 crc kubenswrapper[4902]: I0121 16:41:08.629890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" event={"ID":"4570bbab-b55a-498c-8276-2c7aa0969540","Type":"ContainerDied","Data":"eab440740b0a21242af5c0364ac8efd26b6c03943ba49a53c8eef5d719029002"} Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.093534 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.196427 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") pod \"4570bbab-b55a-498c-8276-2c7aa0969540\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.196653 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") pod \"4570bbab-b55a-498c-8276-2c7aa0969540\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.196805 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") pod \"4570bbab-b55a-498c-8276-2c7aa0969540\" (UID: \"4570bbab-b55a-498c-8276-2c7aa0969540\") " Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.202659 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7" (OuterVolumeSpecName: "kube-api-access-tdvm7") pod "4570bbab-b55a-498c-8276-2c7aa0969540" (UID: "4570bbab-b55a-498c-8276-2c7aa0969540"). InnerVolumeSpecName "kube-api-access-tdvm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.225962 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4570bbab-b55a-498c-8276-2c7aa0969540" (UID: "4570bbab-b55a-498c-8276-2c7aa0969540"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.226831 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory" (OuterVolumeSpecName: "inventory") pod "4570bbab-b55a-498c-8276-2c7aa0969540" (UID: "4570bbab-b55a-498c-8276-2c7aa0969540"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.300358 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.300411 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4570bbab-b55a-498c-8276-2c7aa0969540-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.300423 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdvm7\" (UniqueName: \"kubernetes.io/projected/4570bbab-b55a-498c-8276-2c7aa0969540-kube-api-access-tdvm7\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.653255 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" event={"ID":"4570bbab-b55a-498c-8276-2c7aa0969540","Type":"ContainerDied","Data":"88ff5cbcfa1e27400f64e32a1c15c4a3a56ae22145423ed31c76145fed2fd012"} Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.653751 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88ff5cbcfa1e27400f64e32a1c15c4a3a56ae22145423ed31c76145fed2fd012" Jan 21 16:41:10 crc kubenswrapper[4902]: I0121 16:41:10.653376 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-w46l6" Jan 21 16:41:13 crc kubenswrapper[4902]: I0121 16:41:13.618133 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:13 crc kubenswrapper[4902]: I0121 16:41:13.665325 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:13 crc kubenswrapper[4902]: I0121 16:41:13.917261 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:41:14 crc kubenswrapper[4902]: I0121 16:41:14.687064 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-st4zd" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server" containerID="cri-o://f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a" gracePeriod=2 Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.012293 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.172074 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.345455 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") pod \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.345616 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") pod \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.346107 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") pod \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\" (UID: \"4bf7e55d-ec94-44b6-96c2-04452baeb3b6\") " Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.347384 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities" (OuterVolumeSpecName: "utilities") pod "4bf7e55d-ec94-44b6-96c2-04452baeb3b6" (UID: "4bf7e55d-ec94-44b6-96c2-04452baeb3b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.350823 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx" (OuterVolumeSpecName: "kube-api-access-hrcfx") pod "4bf7e55d-ec94-44b6-96c2-04452baeb3b6" (UID: "4bf7e55d-ec94-44b6-96c2-04452baeb3b6"). InnerVolumeSpecName "kube-api-access-hrcfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.449162 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrcfx\" (UniqueName: \"kubernetes.io/projected/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-kube-api-access-hrcfx\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.449195 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.473551 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bf7e55d-ec94-44b6-96c2-04452baeb3b6" (UID: "4bf7e55d-ec94-44b6-96c2-04452baeb3b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.551085 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf7e55d-ec94-44b6-96c2-04452baeb3b6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.696919 4902 generic.go:334] "Generic (PLEG): container finished" podID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerID="f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a" exitCode=0 Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.696961 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerDied","Data":"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a"} Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.696995 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-st4zd" event={"ID":"4bf7e55d-ec94-44b6-96c2-04452baeb3b6","Type":"ContainerDied","Data":"bb7ccdf2827a1f6dd75b1026bcd57af8e936b222c3cfce0567e1538bfba4bc6e"} Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.697013 4902 scope.go:117] "RemoveContainer" containerID="f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.697160 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-st4zd" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.830390 4902 scope.go:117] "RemoveContainer" containerID="f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.837427 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.847460 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-st4zd"] Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.852966 4902 scope.go:117] "RemoveContainer" containerID="31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.922073 4902 scope.go:117] "RemoveContainer" containerID="f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a" Jan 21 16:41:15 crc kubenswrapper[4902]: E0121 16:41:15.922555 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a\": container with ID starting with f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a not found: ID does not exist" containerID="f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.922595 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a"} err="failed to get container status \"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a\": rpc error: code = NotFound desc = could not find container \"f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a\": container with ID starting with f6b52bf48713540b124a970bdb374a82a9fdda21f79f3b003d53dfab3f77cf0a not found: ID does not exist" Jan 21 16:41:15 crc 
kubenswrapper[4902]: I0121 16:41:15.922635 4902 scope.go:117] "RemoveContainer" containerID="f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509" Jan 21 16:41:15 crc kubenswrapper[4902]: E0121 16:41:15.922969 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509\": container with ID starting with f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509 not found: ID does not exist" containerID="f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.923021 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509"} err="failed to get container status \"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509\": rpc error: code = NotFound desc = could not find container \"f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509\": container with ID starting with f37ae92ee1943e1f682d72ce1930fcb0235222a6cd05d7e2eee50137e7204509 not found: ID does not exist" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.923073 4902 scope.go:117] "RemoveContainer" containerID="31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0" Jan 21 16:41:15 crc kubenswrapper[4902]: E0121 16:41:15.923473 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0\": container with ID starting with 31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0 not found: ID does not exist" containerID="31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0" Jan 21 16:41:15 crc kubenswrapper[4902]: I0121 16:41:15.923506 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0"} err="failed to get container status \"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0\": rpc error: code = NotFound desc = could not find container \"31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0\": container with ID starting with 31ab11aa334b47f67e0534f5f80d282b7130b052e6114028cd006aa685450da0 not found: ID does not exist" Jan 21 16:41:16 crc kubenswrapper[4902]: I0121 16:41:16.314295 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" path="/var/lib/kubelet/pods/4bf7e55d-ec94-44b6-96c2-04452baeb3b6/volumes" Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.331386 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:41:17 crc kubenswrapper[4902]: E0121 16:41:17.331876 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.338791 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"] Jan 
21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.339056 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mz8dz" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="registry-server" containerID="cri-o://1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26" gracePeriod=2 Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.718268 4902 generic.go:334] "Generic (PLEG): container finished" podID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerID="1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26" exitCode=0 Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.718322 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerDied","Data":"1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26"} Jan 21 16:41:17 crc kubenswrapper[4902]: I0121 16:41:17.889127 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.029418 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-2qbs2"] Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030119 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="registry-server" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030136 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="registry-server" Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030148 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="extract-content" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030155 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="extract-content" Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030199 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030206 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server" Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030223 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4570bbab-b55a-498c-8276-2c7aa0969540" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030230 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4570bbab-b55a-498c-8276-2c7aa0969540" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030241 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="extract-content" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030247 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="extract-content" Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030258 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" 
containerName="extract-utilities" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030264 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="extract-utilities" Jan 21 16:41:18 crc kubenswrapper[4902]: E0121 16:41:18.030276 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="extract-utilities" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030283 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="extract-utilities" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030458 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf7e55d-ec94-44b6-96c2-04452baeb3b6" containerName="registry-server" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030470 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4570bbab-b55a-498c-8276-2c7aa0969540" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.030490 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" containerName="registry-server" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.031244 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.034273 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.034718 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.034877 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.035029 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.045461 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-2qbs2"] Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.061332 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") pod \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.061435 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") pod \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.061621 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") pod \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\" (UID: \"b65e05c0-de41-4782-ba9c-a82a8ab0f83a\") " Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.063680 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities" (OuterVolumeSpecName: "utilities") pod "b65e05c0-de41-4782-ba9c-a82a8ab0f83a" (UID: "b65e05c0-de41-4782-ba9c-a82a8ab0f83a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.072887 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd" (OuterVolumeSpecName: "kube-api-access-q6tqd") pod "b65e05c0-de41-4782-ba9c-a82a8ab0f83a" (UID: "b65e05c0-de41-4782-ba9c-a82a8ab0f83a"). InnerVolumeSpecName "kube-api-access-q6tqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.133138 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b65e05c0-de41-4782-ba9c-a82a8ab0f83a" (UID: "b65e05c0-de41-4782-ba9c-a82a8ab0f83a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.164694 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkq24\" (UniqueName: \"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.164928 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.165083 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.165207 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.165238 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6tqd\" (UniqueName: \"kubernetes.io/projected/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-kube-api-access-q6tqd\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.165251 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65e05c0-de41-4782-ba9c-a82a8ab0f83a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.266754 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkq24\" (UniqueName: 
\"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.266899 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.266959 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.270652 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.271626 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.283870 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkq24\" (UniqueName: \"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") pod \"configure-os-openstack-openstack-cell1-2qbs2\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.434328 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.786403 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mz8dz" event={"ID":"b65e05c0-de41-4782-ba9c-a82a8ab0f83a","Type":"ContainerDied","Data":"ee8e92833fe775226f0b1a1c483d4e9507784ce3af65b37360c09279b01577fa"} Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.786670 4902 scope.go:117] "RemoveContainer" containerID="1d8a9965a9fa69ca3a927f548d26553ae82c7a81151442950d884266cba4af26" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.786872 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mz8dz" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.829317 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"] Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.841021 4902 scope.go:117] "RemoveContainer" containerID="f2a60e3589ca79881dcfa599a9ffc2679c7179019127fdf6fe4134dd2dc99dd8" Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.845948 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mz8dz"] Jan 21 16:41:18 crc kubenswrapper[4902]: I0121 16:41:18.866618 4902 scope.go:117] "RemoveContainer" containerID="14dd86a2e3b7d5c82bcad09aa1bc2117f7eed1b5efb481174f490e8dcceec431" Jan 21 16:41:19 crc kubenswrapper[4902]: I0121 16:41:19.263395 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-2qbs2"] Jan 21 16:41:19 crc kubenswrapper[4902]: I0121 16:41:19.797136 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" event={"ID":"7ddf7812-c5ee-4c59-ad12-19f7b1a00442","Type":"ContainerStarted","Data":"42b3b08c13fd764f45a6ac87813e34b5bd5db9232e160886c1e404847f1ca7b7"} Jan 21 16:41:20 crc kubenswrapper[4902]: I0121 16:41:20.313858 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b65e05c0-de41-4782-ba9c-a82a8ab0f83a" path="/var/lib/kubelet/pods/b65e05c0-de41-4782-ba9c-a82a8ab0f83a/volumes" Jan 21 16:41:20 crc kubenswrapper[4902]: I0121 16:41:20.807414 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" event={"ID":"7ddf7812-c5ee-4c59-ad12-19f7b1a00442","Type":"ContainerStarted","Data":"b0ea5beb9b05a87f4ba1b2af835803387b11edca7ca299b52d070d2d5b519bdd"} Jan 21 16:41:20 crc kubenswrapper[4902]: I0121 16:41:20.835663 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" podStartSLOduration=2.374570648 podStartE2EDuration="2.835635605s" podCreationTimestamp="2026-01-21 16:41:18 +0000 UTC" firstStartedPulling="2026-01-21 16:41:19.272620687 +0000 UTC m=+7641.349453716" lastFinishedPulling="2026-01-21 16:41:19.733685624 +0000 UTC m=+7641.810518673" observedRunningTime="2026-01-21 16:41:20.825016694 +0000 UTC m=+7642.901849733" watchObservedRunningTime="2026-01-21 16:41:20.835635605 +0000 UTC m=+7642.912468634" Jan 21 16:41:32 crc kubenswrapper[4902]: I0121 16:41:32.295635 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:41:32 crc kubenswrapper[4902]: E0121 16:41:32.296436 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:41:46 crc kubenswrapper[4902]: I0121 16:41:46.295596 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:41:46 crc kubenswrapper[4902]: E0121 16:41:46.296377 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:41:56 crc kubenswrapper[4902]: I0121 16:41:56.171217 4902 generic.go:334] "Generic (PLEG): container finished" podID="7ddf7812-c5ee-4c59-ad12-19f7b1a00442" containerID="b0ea5beb9b05a87f4ba1b2af835803387b11edca7ca299b52d070d2d5b519bdd" exitCode=2 Jan 21 16:41:56 crc kubenswrapper[4902]: I0121 16:41:56.171328 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" event={"ID":"7ddf7812-c5ee-4c59-ad12-19f7b1a00442","Type":"ContainerDied","Data":"b0ea5beb9b05a87f4ba1b2af835803387b11edca7ca299b52d070d2d5b519bdd"} Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.635437 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.749143 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") pod \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.749301 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") pod \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.749429 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkq24\" (UniqueName: \"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") pod \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\" (UID: \"7ddf7812-c5ee-4c59-ad12-19f7b1a00442\") " Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.755425 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24" (OuterVolumeSpecName: "kube-api-access-nkq24") pod "7ddf7812-c5ee-4c59-ad12-19f7b1a00442" (UID: "7ddf7812-c5ee-4c59-ad12-19f7b1a00442"). InnerVolumeSpecName "kube-api-access-nkq24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.785527 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory" (OuterVolumeSpecName: "inventory") pod "7ddf7812-c5ee-4c59-ad12-19f7b1a00442" (UID: "7ddf7812-c5ee-4c59-ad12-19f7b1a00442"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.797351 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7ddf7812-c5ee-4c59-ad12-19f7b1a00442" (UID: "7ddf7812-c5ee-4c59-ad12-19f7b1a00442"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.852830 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkq24\" (UniqueName: \"kubernetes.io/projected/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-kube-api-access-nkq24\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.852881 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:57 crc kubenswrapper[4902]: I0121 16:41:57.852900 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ddf7812-c5ee-4c59-ad12-19f7b1a00442-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:58 crc kubenswrapper[4902]: I0121 16:41:58.217965 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" event={"ID":"7ddf7812-c5ee-4c59-ad12-19f7b1a00442","Type":"ContainerDied","Data":"42b3b08c13fd764f45a6ac87813e34b5bd5db9232e160886c1e404847f1ca7b7"} Jan 21 16:41:58 crc kubenswrapper[4902]: I0121 16:41:58.218008 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b3b08c13fd764f45a6ac87813e34b5bd5db9232e160886c1e404847f1ca7b7" Jan 21 16:41:58 crc kubenswrapper[4902]: I0121 16:41:58.218079 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-2qbs2" Jan 21 16:42:00 crc kubenswrapper[4902]: I0121 16:42:00.295533 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:42:00 crc kubenswrapper[4902]: E0121 16:42:00.296497 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:42:15 crc kubenswrapper[4902]: I0121 16:42:15.295465 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:42:15 crc kubenswrapper[4902]: E0121 16:42:15.296756 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.046249 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-crnmm"] Jan 21 16:42:16 crc kubenswrapper[4902]: E0121 16:42:16.046758 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ddf7812-c5ee-4c59-ad12-19f7b1a00442" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.046778 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ddf7812-c5ee-4c59-ad12-19f7b1a00442" 
containerName="configure-os-openstack-openstack-cell1" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.047142 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ddf7812-c5ee-4c59-ad12-19f7b1a00442" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.048286 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.052522 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.052648 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.052900 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.053228 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.072173 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-crnmm"] Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.214349 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.214769 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.214833 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.317352 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.318546 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 
crc kubenswrapper[4902]: I0121 16:42:16.318762 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.328656 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.334329 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.355232 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") pod \"configure-os-openstack-openstack-cell1-crnmm\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.386302 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:16 crc kubenswrapper[4902]: I0121 16:42:16.939994 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-crnmm"] Jan 21 16:42:16 crc kubenswrapper[4902]: W0121 16:42:16.948001 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d010648_1998_4311_917b_20626c2f5586.slice/crio-5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7 WatchSource:0}: Error finding container 5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7: Status 404 returned error can't find the container with id 5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7 Jan 21 16:42:17 crc kubenswrapper[4902]: I0121 16:42:17.414580 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" event={"ID":"9d010648-1998-4311-917b-20626c2f5586","Type":"ContainerStarted","Data":"5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7"} Jan 21 16:42:18 crc kubenswrapper[4902]: I0121 16:42:18.424198 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" event={"ID":"9d010648-1998-4311-917b-20626c2f5586","Type":"ContainerStarted","Data":"70faaab266dd818acbdadfb66ada41235c8ee46467514dc67255ecaa970bb0ce"} Jan 21 16:42:18 crc kubenswrapper[4902]: I0121 16:42:18.445329 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" podStartSLOduration=2.030778996 podStartE2EDuration="2.445313435s" podCreationTimestamp="2026-01-21 16:42:16 +0000 UTC" firstStartedPulling="2026-01-21 16:42:16.950647182 +0000 UTC m=+7699.027480221" lastFinishedPulling="2026-01-21 16:42:17.365181631 +0000 UTC m=+7699.442014660" observedRunningTime="2026-01-21 16:42:18.443698949 +0000 UTC m=+7700.520531988" watchObservedRunningTime="2026-01-21 16:42:18.445313435 +0000 UTC m=+7700.522146464" Jan 21 16:42:29 crc kubenswrapper[4902]: I0121 16:42:29.294457 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:42:29 crc kubenswrapper[4902]: E0121 16:42:29.295291 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:42:40 crc kubenswrapper[4902]: I0121 16:42:40.297084 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:42:40 crc kubenswrapper[4902]: E0121 16:42:40.297896 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:42:52 crc kubenswrapper[4902]: I0121 16:42:52.294833 4902 scope.go:117] "RemoveContainer" 
containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:42:52 crc kubenswrapper[4902]: E0121 16:42:52.295688 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:42:56 crc kubenswrapper[4902]: I0121 16:42:56.828962 4902 generic.go:334] "Generic (PLEG): container finished" podID="9d010648-1998-4311-917b-20626c2f5586" containerID="70faaab266dd818acbdadfb66ada41235c8ee46467514dc67255ecaa970bb0ce" exitCode=2 Jan 21 16:42:56 crc kubenswrapper[4902]: I0121 16:42:56.829082 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" event={"ID":"9d010648-1998-4311-917b-20626c2f5586","Type":"ContainerDied","Data":"70faaab266dd818acbdadfb66ada41235c8ee46467514dc67255ecaa970bb0ce"} Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.331375 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.483517 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") pod \"9d010648-1998-4311-917b-20626c2f5586\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.483704 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") pod \"9d010648-1998-4311-917b-20626c2f5586\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.485094 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") pod \"9d010648-1998-4311-917b-20626c2f5586\" (UID: \"9d010648-1998-4311-917b-20626c2f5586\") " Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.492340 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh" (OuterVolumeSpecName: "kube-api-access-c5bdh") pod "9d010648-1998-4311-917b-20626c2f5586" (UID: "9d010648-1998-4311-917b-20626c2f5586"). InnerVolumeSpecName "kube-api-access-c5bdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.524471 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "9d010648-1998-4311-917b-20626c2f5586" (UID: "9d010648-1998-4311-917b-20626c2f5586"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.526241 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory" (OuterVolumeSpecName: "inventory") pod "9d010648-1998-4311-917b-20626c2f5586" (UID: "9d010648-1998-4311-917b-20626c2f5586"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.587799 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5bdh\" (UniqueName: \"kubernetes.io/projected/9d010648-1998-4311-917b-20626c2f5586-kube-api-access-c5bdh\") on node \"crc\" DevicePath \"\"" Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.587842 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.587857 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d010648-1998-4311-917b-20626c2f5586-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.854674 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" event={"ID":"9d010648-1998-4311-917b-20626c2f5586","Type":"ContainerDied","Data":"5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7"} Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.854715 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-crnmm" Jan 21 16:42:58 crc kubenswrapper[4902]: I0121 16:42:58.854719 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c9d1edc35380fe37afa72007a3a41355f5f7dc6a6b0f3579382af699a5d5cb7" Jan 21 16:43:07 crc kubenswrapper[4902]: I0121 16:43:07.296120 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:43:07 crc kubenswrapper[4902]: E0121 16:43:07.297143 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:43:18 crc kubenswrapper[4902]: I0121 16:43:18.301697 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:43:19 crc kubenswrapper[4902]: I0121 16:43:19.062541 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030"} Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.050179 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-755fc"] Jan 21 16:43:36 crc kubenswrapper[4902]: E0121 16:43:36.051643 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9d010648-1998-4311-917b-20626c2f5586" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.051662 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d010648-1998-4311-917b-20626c2f5586" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.051933 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d010648-1998-4311-917b-20626c2f5586" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.053876 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.058006 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.058439 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.058522 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-c55r2" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.060135 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-755fc"] Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.063762 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.108693 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.109031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.109111 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.211401 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.211565 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.213643 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.218495 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.219474 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.239240 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") pod \"configure-os-openstack-openstack-cell1-755fc\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.392150 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:43:36 crc kubenswrapper[4902]: I0121 16:43:36.945365 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-755fc"] Jan 21 16:43:36 crc kubenswrapper[4902]: W0121 16:43:36.946429 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d7d4592_eaab_4fdb_a63f_6b92285b1129.slice/crio-7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615 WatchSource:0}: Error finding container 7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615: Status 404 returned error can't find the container with id 7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615 Jan 21 16:43:37 crc kubenswrapper[4902]: I0121 16:43:37.232417 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-755fc" event={"ID":"1d7d4592-eaab-4fdb-a63f-6b92285b1129","Type":"ContainerStarted","Data":"7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615"} Jan 21 16:43:38 crc kubenswrapper[4902]: I0121 16:43:38.244303 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-755fc" event={"ID":"1d7d4592-eaab-4fdb-a63f-6b92285b1129","Type":"ContainerStarted","Data":"a03258a4cb27bf6df46d45d9414de4b0caf988d4d615537ec950490a5b51869c"} Jan 21 16:43:38 crc kubenswrapper[4902]: I0121 16:43:38.274279 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-755fc" podStartSLOduration=1.634461135 podStartE2EDuration="2.274253199s" podCreationTimestamp="2026-01-21 16:43:36 +0000 UTC" firstStartedPulling="2026-01-21 16:43:36.949734651 +0000 UTC m=+7779.026567670" lastFinishedPulling="2026-01-21 16:43:37.589526705 +0000 UTC m=+7779.666359734" observedRunningTime="2026-01-21 16:43:38.264017629 +0000 UTC m=+7780.340850668" watchObservedRunningTime="2026-01-21 16:43:38.274253199 +0000 UTC m=+7780.351086228" Jan 21 16:44:12 crc kubenswrapper[4902]: I0121 16:44:12.581298 4902 generic.go:334] "Generic (PLEG): container finished" podID="1d7d4592-eaab-4fdb-a63f-6b92285b1129" containerID="a03258a4cb27bf6df46d45d9414de4b0caf988d4d615537ec950490a5b51869c" exitCode=2 Jan 21 16:44:12 crc kubenswrapper[4902]: I0121 16:44:12.581353 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-755fc" event={"ID":"1d7d4592-eaab-4fdb-a63f-6b92285b1129","Type":"ContainerDied","Data":"a03258a4cb27bf6df46d45d9414de4b0caf988d4d615537ec950490a5b51869c"} Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.507441 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.613025 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") pod \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.613094 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") pod \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.613146 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") pod \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\" (UID: \"1d7d4592-eaab-4fdb-a63f-6b92285b1129\") " Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.624670 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7" (OuterVolumeSpecName: "kube-api-access-cv6x7") pod "1d7d4592-eaab-4fdb-a63f-6b92285b1129" (UID: "1d7d4592-eaab-4fdb-a63f-6b92285b1129"). InnerVolumeSpecName "kube-api-access-cv6x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.637433 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-755fc" event={"ID":"1d7d4592-eaab-4fdb-a63f-6b92285b1129","Type":"ContainerDied","Data":"7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615"} Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.637477 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7606ed6c8b018995a24c565b45d1cfab6b7e8fc84d30b4a5dd724907ef6dc615" Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.637494 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-755fc" Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.656072 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1d7d4592-eaab-4fdb-a63f-6b92285b1129" (UID: "1d7d4592-eaab-4fdb-a63f-6b92285b1129"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.675029 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory" (OuterVolumeSpecName: "inventory") pod "1d7d4592-eaab-4fdb-a63f-6b92285b1129" (UID: "1d7d4592-eaab-4fdb-a63f-6b92285b1129"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.715262 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv6x7\" (UniqueName: \"kubernetes.io/projected/1d7d4592-eaab-4fdb-a63f-6b92285b1129-kube-api-access-cv6x7\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.715303 4902 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:44:14 crc kubenswrapper[4902]: I0121 16:44:14.715316 4902 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d7d4592-eaab-4fdb-a63f-6b92285b1129-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.212316 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"] Jan 21 16:45:00 crc kubenswrapper[4902]: E0121 16:45:00.213754 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7d4592-eaab-4fdb-a63f-6b92285b1129" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.213772 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7d4592-eaab-4fdb-a63f-6b92285b1129" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.214022 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7d4592-eaab-4fdb-a63f-6b92285b1129" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.214838 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.217109 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.217328 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.233158 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"] Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.394581 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.394680 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.394853 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.497185 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.497292 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.497369 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.498656 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") pod 
\"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.503076 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.515458 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") pod \"collect-profiles-29483565-7hzv8\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:00 crc kubenswrapper[4902]: I0121 16:45:00.534168 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:01 crc kubenswrapper[4902]: I0121 16:45:01.014984 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"] Jan 21 16:45:01 crc kubenswrapper[4902]: I0121 16:45:01.186625 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" event={"ID":"504c5756-9427-4037-be3a-481fc1e8715f","Type":"ContainerStarted","Data":"281062accf2e074810f33f20bc1bf88bb0df7e0affa7d8b2581066f49b358d78"} Jan 21 16:45:02 crc kubenswrapper[4902]: I0121 16:45:02.198800 4902 generic.go:334] "Generic (PLEG): container finished" podID="504c5756-9427-4037-be3a-481fc1e8715f" containerID="aa3c7bb404afe310e56cb2617f84d467c8f578e09af1f3e30d342fd88646315e" exitCode=0 Jan 21 16:45:02 crc kubenswrapper[4902]: I0121 16:45:02.198903 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" event={"ID":"504c5756-9427-4037-be3a-481fc1e8715f","Type":"ContainerDied","Data":"aa3c7bb404afe310e56cb2617f84d467c8f578e09af1f3e30d342fd88646315e"} Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.518679 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.668498 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") pod \"504c5756-9427-4037-be3a-481fc1e8715f\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.668569 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") pod \"504c5756-9427-4037-be3a-481fc1e8715f\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.668846 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") pod \"504c5756-9427-4037-be3a-481fc1e8715f\" (UID: \"504c5756-9427-4037-be3a-481fc1e8715f\") " Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.669093 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume" (OuterVolumeSpecName: "config-volume") pod "504c5756-9427-4037-be3a-481fc1e8715f" (UID: "504c5756-9427-4037-be3a-481fc1e8715f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.669456 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/504c5756-9427-4037-be3a-481fc1e8715f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.673549 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "504c5756-9427-4037-be3a-481fc1e8715f" (UID: "504c5756-9427-4037-be3a-481fc1e8715f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.674171 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2" (OuterVolumeSpecName: "kube-api-access-4vvx2") pod "504c5756-9427-4037-be3a-481fc1e8715f" (UID: "504c5756-9427-4037-be3a-481fc1e8715f"). InnerVolumeSpecName "kube-api-access-4vvx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.771625 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vvx2\" (UniqueName: \"kubernetes.io/projected/504c5756-9427-4037-be3a-481fc1e8715f-kube-api-access-4vvx2\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:03 crc kubenswrapper[4902]: I0121 16:45:03.771684 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/504c5756-9427-4037-be3a-481fc1e8715f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.225476 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" event={"ID":"504c5756-9427-4037-be3a-481fc1e8715f","Type":"ContainerDied","Data":"281062accf2e074810f33f20bc1bf88bb0df7e0affa7d8b2581066f49b358d78"} Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.225522 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="281062accf2e074810f33f20bc1bf88bb0df7e0affa7d8b2581066f49b358d78" Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.225555 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8" Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.586527 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"] Jan 21 16:45:04 crc kubenswrapper[4902]: I0121 16:45:04.595166 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-jmqbp"] Jan 21 16:45:06 crc kubenswrapper[4902]: I0121 16:45:06.307136 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f705e9e-4608-4e35-9f28-665a52f2aba6" path="/var/lib/kubelet/pods/2f705e9e-4608-4e35-9f28-665a52f2aba6/volumes" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.785523 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-649bt/must-gather-sx2rp"] Jan 21 16:45:35 crc kubenswrapper[4902]: E0121 16:45:35.786382 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504c5756-9427-4037-be3a-481fc1e8715f" containerName="collect-profiles" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.786425 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="504c5756-9427-4037-be3a-481fc1e8715f" containerName="collect-profiles" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.786637 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="504c5756-9427-4037-be3a-481fc1e8715f" containerName="collect-profiles" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.788166 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.792943 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-649bt"/"openshift-service-ca.crt" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.793098 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-649bt"/"kube-root-ca.crt" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.793238 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-649bt"/"default-dockercfg-lrpnl" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.811523 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-649bt/must-gather-sx2rp"] Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.923939 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnzhl\" (UniqueName: \"kubernetes.io/projected/7f3b3035-07fa-46da-ba99-74131a56f5b2-kube-api-access-jnzhl\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:35 crc kubenswrapper[4902]: I0121 16:45:35.924163 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3b3035-07fa-46da-ba99-74131a56f5b2-must-gather-output\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.025608 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnzhl\" (UniqueName: \"kubernetes.io/projected/7f3b3035-07fa-46da-ba99-74131a56f5b2-kube-api-access-jnzhl\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.025819 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3b3035-07fa-46da-ba99-74131a56f5b2-must-gather-output\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.026816 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7f3b3035-07fa-46da-ba99-74131a56f5b2-must-gather-output\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.053008 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnzhl\" (UniqueName: \"kubernetes.io/projected/7f3b3035-07fa-46da-ba99-74131a56f5b2-kube-api-access-jnzhl\") pod \"must-gather-sx2rp\" (UID: \"7f3b3035-07fa-46da-ba99-74131a56f5b2\") " pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.110666 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/must-gather-sx2rp" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.210368 4902 scope.go:117] "RemoveContainer" containerID="32fe8ff5a7cc5267205a3f1e8b759ee5d99a41ef6bca9732cd6d5478ff974b57" Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.644704 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-649bt/must-gather-sx2rp"] Jan 21 16:45:36 crc kubenswrapper[4902]: I0121 16:45:36.648351 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:45:37 crc kubenswrapper[4902]: I0121 16:45:37.557797 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/must-gather-sx2rp" event={"ID":"7f3b3035-07fa-46da-ba99-74131a56f5b2","Type":"ContainerStarted","Data":"14a2be8de840189356082d6093558c67e6bee1d9a733fe7d50bb5332aa247b58"} Jan 21 16:45:43 crc kubenswrapper[4902]: I0121 16:45:43.618789 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/must-gather-sx2rp" event={"ID":"7f3b3035-07fa-46da-ba99-74131a56f5b2","Type":"ContainerStarted","Data":"3c104b5b2ba1ed6d0cdf612cdf265bf9d9f9732077b75589cf5586c334a05bbb"} Jan 21 16:45:43 crc kubenswrapper[4902]: I0121 16:45:43.619232 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/must-gather-sx2rp" event={"ID":"7f3b3035-07fa-46da-ba99-74131a56f5b2","Type":"ContainerStarted","Data":"ccfea4af07fa6cc489b0aec7bd59161ebb73b38f88167db3fd5bcc73fa7d7e58"} Jan 21 16:45:43 crc kubenswrapper[4902]: I0121 16:45:43.640393 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-649bt/must-gather-sx2rp" podStartSLOduration=2.24440557 podStartE2EDuration="8.64037188s" podCreationTimestamp="2026-01-21 16:45:35 +0000 UTC" firstStartedPulling="2026-01-21 16:45:36.647952974 +0000 UTC m=+7898.724786003" lastFinishedPulling="2026-01-21 16:45:43.043919284 +0000 UTC m=+7905.120752313" observedRunningTime="2026-01-21 16:45:43.634102403 +0000 UTC m=+7905.710935442" watchObservedRunningTime="2026-01-21 16:45:43.64037188 +0000 UTC m=+7905.717204919" Jan 21 16:45:47 crc kubenswrapper[4902]: E0121 16:45:47.252236 4902 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.21:59922->38.129.56.21:44701: write tcp 38.129.56.21:59922->38.129.56.21:44701: write: connection reset by peer Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.769850 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.770201 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.830192 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-649bt/crc-debug-4hbxl"] Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.831544 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.893287 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.893396 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.995830 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.995905 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:47 crc kubenswrapper[4902]: I0121 16:45:47.996064 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:48 crc kubenswrapper[4902]: I0121 16:45:48.018816 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") pod \"crc-debug-4hbxl\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:48 crc kubenswrapper[4902]: I0121 16:45:48.151540 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:45:48 crc kubenswrapper[4902]: W0121 16:45:48.190833 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb88f391_e5de_44fc_8bb9_7d7b4bddd96d.slice/crio-f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16 WatchSource:0}: Error finding container f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16: Status 404 returned error can't find the container with id f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16 Jan 21 16:45:48 crc kubenswrapper[4902]: I0121 16:45:48.667532 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-4hbxl" event={"ID":"db88f391-e5de-44fc-8bb9-7d7b4bddd96d","Type":"ContainerStarted","Data":"f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16"} Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.524559 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_657b791a-81e2-483e-8ae9-b261f3bc0c41/alertmanager/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.533292 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_657b791a-81e2-483e-8ae9-b261f3bc0c41/config-reloader/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.540288 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_657b791a-81e2-483e-8ae9-b261f3bc0c41/init-config-reloader/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.577402 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_da0893d4-ad82-4a00-8ccf-5e33ead4d85d/aodh-api/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.596410 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_da0893d4-ad82-4a00-8ccf-5e33ead4d85d/aodh-evaluator/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.610266 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_da0893d4-ad82-4a00-8ccf-5e33ead4d85d/aodh-notifier/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.625073 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_da0893d4-ad82-4a00-8ccf-5e33ead4d85d/aodh-listener/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.648789 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85645f8dd4-bf5z5_49dfaf72-0f35-4705-a9d8-830878fc46d1/barbican-api-log/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.656173 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85645f8dd4-bf5z5_49dfaf72-0f35-4705-a9d8-830878fc46d1/barbican-api/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.691021 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8458cc5fd6-z5j6z_95cef3f6-598c-483e-b2b6-bb3d2942f18e/barbican-keystone-listener-log/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.697816 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8458cc5fd6-z5j6z_95cef3f6-598c-483e-b2b6-bb3d2942f18e/barbican-keystone-listener/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.715153 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-c94b5b747-nxfg6_9162d3ad-8f1a-4998-9f4d-a1869af6a23f/barbican-worker-log/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.723035 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-c94b5b747-nxfg6_9162d3ad-8f1a-4998-9f4d-a1869af6a23f/barbican-worker/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.758184 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-zwtbg_03ebbaac-5961-4e6e-8709-93bb85975c9c/bootstrap-openstack-openstack-cell1/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.802882 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_57c9a2f0-4583-4438-b35f-f92aa9a7efe8/ceilometer-central-agent/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.911606 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_57c9a2f0-4583-4438-b35f-f92aa9a7efe8/ceilometer-notification-agent/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.937924 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_57c9a2f0-4583-4438-b35f-f92aa9a7efe8/sg-core/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.958141 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_57c9a2f0-4583-4438-b35f-f92aa9a7efe8/proxy-httpd/0.log" Jan 21 16:45:50 crc kubenswrapper[4902]: I0121 16:45:50.985779 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_24d9842a-4646-47c5-a81c-18e641f7617f/cinder-api-log/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.052657 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_24d9842a-4646-47c5-a81c-18e641f7617f/cinder-api/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.081249 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_16354b62-7b74-468c-8953-3a41b1dc1a66/cinder-scheduler/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.117596 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_16354b62-7b74-468c-8953-3a41b1dc1a66/probe/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.135603 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-jgd86_2418bfc5-bf9b-4397-bc7f-20aa86aa582a/configure-network-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.162169 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-2qbs2_7ddf7812-c5ee-4c59-ad12-19f7b1a00442/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.184687 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-755fc_1d7d4592-eaab-4fdb-a63f-6b92285b1129/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.203159 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-crnmm_9d010648-1998-4311-917b-20626c2f5586/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.277020 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-w46l6_4570bbab-b55a-498c-8276-2c7aa0969540/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.302319 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4c775f77-hlsqd_45e057f7-f682-43f2-a02c-effad070763f/dnsmasq-dns/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.316550 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4c775f77-hlsqd_45e057f7-f682-43f2-a02c-effad070763f/init/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.523472 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-lvw72_d171dc59-1575-4895-b80f-0886e901b704/download-cache-openstack-openstack-cell1/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.537455 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43059835-649d-40c9-bf13-f46c9d6b65a6/glance-log/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.683385 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43059835-649d-40c9-bf13-f46c9d6b65a6/glance-httpd/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.708618 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a21c1b8f-59f7-445b-bc8a-f8e89d7142e5/glance-log/0.log" Jan 21 16:45:51 crc kubenswrapper[4902]: I0121 16:45:51.783993 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a21c1b8f-59f7-445b-bc8a-f8e89d7142e5/glance-httpd/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.026213 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-575dc5884b-mwxz4_9bfec31e-5cec-4820-9f26-34413330e44c/heat-api/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.299582 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5c8d887b44-lnw77_5acd47b5-1a65-41c3-af06-401bd9880c1f/heat-cfnapi/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.317930 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-68647965fb-5bvjr_bb701a34-be50-44cd-b277-b687e8499664/heat-engine/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.554865 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6845bd7746-jd2dk_d71e079c-1163-4e7e-ac94-0e92a0b602ad/horizon-log/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.645901 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6845bd7746-jd2dk_d71e079c-1163-4e7e-ac94-0e92a0b602ad/horizon/0.log" Jan 21 16:45:52 crc kubenswrapper[4902]: I0121 16:45:52.669987 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-7xpxk_e253be6c-dccb-456f-b4ca-0aed1b901c43/install-os-openstack-openstack-cell1/0.log" Jan 21 16:45:53 crc kubenswrapper[4902]: I0121 16:45:53.395195 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-67bfc4c47-flndt_1bc7e490-49b1-4eef-ab29-4453235cf752/keystone-api/0.log" Jan 21 16:45:53 crc kubenswrapper[4902]: I0121 16:45:53.406806 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c4fb45d4-a64d-4e42-86b5-9e3924f0f877/kube-state-metrics/0.log" Jan 21 
16:45:53 crc kubenswrapper[4902]: I0121 16:45:53.416012 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_45f02625-70e9-48ec-8dd4-a0bd456a283b/adoption/0.log" Jan 21 16:45:55 crc kubenswrapper[4902]: I0121 16:45:55.533366 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_32eae2d9-5b57-4ae9-8451-fa00bd7be443/memcached/0.log" Jan 21 16:45:55 crc kubenswrapper[4902]: I0121 16:45:55.593463 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66b9c9869c-btkxh_565a7068-4930-41e5-99bb-a08376495b63/neutron-api/0.log" Jan 21 16:45:55 crc kubenswrapper[4902]: I0121 16:45:55.638149 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66b9c9869c-btkxh_565a7068-4930-41e5-99bb-a08376495b63/neutron-httpd/0.log" Jan 21 16:45:55 crc kubenswrapper[4902]: I0121 16:45:55.740325 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8603f024-f71f-486b-93aa-e6397021aa48/nova-api-log/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.122290 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8603f024-f71f-486b-93aa-e6397021aa48/nova-api-api/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.209825 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_61fa221c-a236-471b-a3ca-0efc339d0fcc/nova-cell0-conductor-conductor/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.280918 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f7b3d3ef-1806-4318-95f7-eb9cd2526d32/nova-cell1-conductor-conductor/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.362000 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_78825018-5d0a-4fe7-83c7-ef79700642cd/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 16:45:56 crc kubenswrapper[4902]: I0121 16:45:56.435951 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_98338524-801f-465f-8845-1d061027c735/nova-metadata-log/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.414053 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_98338524-801f-465f-8845-1d061027c735/nova-metadata-metadata/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.523397 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6d12c9a0-2841-4a53-abd3-0cdb15d404fb/nova-scheduler-scheduler/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.802453 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-77584c4dc-lmbjv_441cf475-eec9-4cee-84ab-7807e9ab0b75/octavia-api/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.827094 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-77584c4dc-lmbjv_441cf475-eec9-4cee-84ab-7807e9ab0b75/octavia-api-provider-agent/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.847267 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-77584c4dc-lmbjv_441cf475-eec9-4cee-84ab-7807e9ab0b75/init/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.910841 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-vtnkx_e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39/octavia-healthmanager/0.log" Jan 21 16:45:57 crc 
kubenswrapper[4902]: I0121 16:45:57.918481 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-vtnkx_e3d9ba4c-aca9-49ec-ace9-a1ddcff63f39/init/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.947703 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-pr9tl_34cb5d58-0b3f-40eb-a5ee-b8ab812c8008/octavia-housekeeping/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.957030 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-pr9tl_34cb5d58-0b3f-40eb-a5ee-b8ab812c8008/init/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.967634 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-kn74s_802fca2f-9dae-4f46-aaf3-c688c8ebbdfb/octavia-rsyslog/0.log" Jan 21 16:45:57 crc kubenswrapper[4902]: I0121 16:45:57.982810 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-kn74s_802fca2f-9dae-4f46-aaf3-c688c8ebbdfb/init/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.096027 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-drv9p_646b20f3-5a05-4352-9645-69bed7f67dae/octavia-worker/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.108438 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-drv9p_646b20f3-5a05-4352-9645-69bed7f67dae/init/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.131754 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a211ebd7-f82f-4cc7-91d3-77ec265a5d11/galera/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.149801 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a211ebd7-f82f-4cc7-91d3-77ec265a5d11/mysql-bootstrap/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.172950 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a02660d2-21f1-4d0b-9351-efc03413d6f8/galera/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.183270 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a02660d2-21f1-4d0b-9351-efc03413d6f8/mysql-bootstrap/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.197226 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_052c7402-6934-4f86-bb78-e83d7da3b587/openstackclient/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.216649 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9bqlx_cc475055-769c-4199-8486-3bdca7cd05bc/ovn-controller/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.252747 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9vx8r_8f209787-a9f8-41df-8298-79c1381eecbb/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.270595 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qfhz4_d120f671-59d9-42ef-a905-2a6203c5896c/ovsdb-server/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.284062 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qfhz4_d120f671-59d9-42ef-a905-2a6203c5896c/ovs-vswitchd/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.290017 4902 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-controller-ovs-qfhz4_d120f671-59d9-42ef-a905-2a6203c5896c/ovsdb-server-init/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.309226 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_15260f61-f63b-48cf-8c1d-1269ed5264d6/adoption/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.322446 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b8db1a8e-13c3-41be-9f21-24077d0e4e29/ovn-northd/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.327026 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b8db1a8e-13c3-41be-9f21-24077d0e4e29/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.643646 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_52b530ea-b7ee-4420-a3d6-d140ac75c474/ovsdbserver-nb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.650816 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_52b530ea-b7ee-4420-a3d6-d140ac75c474/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.667000 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_fcf74aba-3fc7-42ea-9537-a176dbf2a2e2/ovsdbserver-nb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.672707 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_fcf74aba-3fc7-42ea-9537-a176dbf2a2e2/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.688240 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_69d6d956-f400-4339-8b68-c2644bb9b9eb/ovsdbserver-nb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.692712 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_69d6d956-f400-4339-8b68-c2644bb9b9eb/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.708285 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fa609e80-09d5-4393-a79f-9989f9223bdd/ovsdbserver-sb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.712867 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fa609e80-09d5-4393-a79f-9989f9223bdd/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.732955 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_51aa3a3a-61f9-4757-b302-aa170904d97f/ovsdbserver-sb/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.738415 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_51aa3a3a-61f9-4757-b302-aa170904d97f/openstack-network-exporter/0.log" Jan 21 16:45:58 crc kubenswrapper[4902]: I0121 16:45:58.992771 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_aadc3978-ec1c-4d8d-8d02-f199d6509d5c/ovsdbserver-sb/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:58.999960 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_aadc3978-ec1c-4d8d-8d02-f199d6509d5c/openstack-network-exporter/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.097547 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-856775b9dd-twjxc_43a8c70b-ebc7-4ce0-8d5c-e790226eff45/placement-log/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.126628 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-856775b9dd-twjxc_43a8c70b-ebc7-4ce0-8d5c-e790226eff45/placement-api/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.261514 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c7pg7q_8b723bd7-4449-4516-bcc6-9d57d981fbda/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.282361 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_294a561c-9181-4330-86e5-ab51e9f3c07c/prometheus/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.296594 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_294a561c-9181-4330-86e5-ab51e9f3c07c/config-reloader/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.304570 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_294a561c-9181-4330-86e5-ab51e9f3c07c/thanos-sidecar/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.364650 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_294a561c-9181-4330-86e5-ab51e9f3c07c/init-config-reloader/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.394354 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e0bcf8cd-3dd9-409b-84d9-693f7e471fc1/rabbitmq/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.402618 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e0bcf8cd-3dd9-409b-84d9-693f7e471fc1/setup-container/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.438048 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7f24aaa5-50e0-4e80-ba28-3fa2b770fac8/rabbitmq/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.448171 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7f24aaa5-50e0-4e80-ba28-3fa2b770fac8/setup-container/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.537513 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5866fbc874-ktwnr_4d3194a4-20d2-47cf-8d32-37a8afa5738d/proxy-httpd/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.549559 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5866fbc874-ktwnr_4d3194a4-20d2-47cf-8d32-37a8afa5738d/proxy-server/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.559661 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-mmsfz_4000cb23-899c-4f52-8c37-8e1c7108a21d/swift-ring-rebalance/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.632041 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-lvn8c_18a1d8a3-fcb5-408d-88ab-97d74bad0a8f/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Jan 21 16:45:59 crc kubenswrapper[4902]: I0121 16:45:59.642037 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-5c9t8_ffce6892-25f4-48d1-b314-24d784fbc43f/validate-network-openstack-openstack-cell1/0.log" Jan 21 16:46:01 crc kubenswrapper[4902]: I0121 16:46:01.832834 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-4hbxl" event={"ID":"db88f391-e5de-44fc-8bb9-7d7b4bddd96d","Type":"ContainerStarted","Data":"31fc280c2d7e2874d5d3ebbb00ce9f04de5add4709a413f82f3fb2ff907c3669"} Jan 21 16:46:01 crc kubenswrapper[4902]: I0121 16:46:01.856842 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-649bt/crc-debug-4hbxl" podStartSLOduration=1.8305830429999999 podStartE2EDuration="14.856827322s" podCreationTimestamp="2026-01-21 16:45:47 +0000 UTC" firstStartedPulling="2026-01-21 16:45:48.193863256 +0000 UTC m=+7910.270696285" lastFinishedPulling="2026-01-21 16:46:01.220107535 +0000 UTC m=+7923.296940564" observedRunningTime="2026-01-21 16:46:01.851567213 +0000 UTC m=+7923.928400242" watchObservedRunningTime="2026-01-21 16:46:01.856827322 +0000 UTC m=+7923.933660351" Jan 21 16:46:15 crc kubenswrapper[4902]: I0121 16:46:15.204374 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/controller/0.log" Jan 21 16:46:15 crc kubenswrapper[4902]: I0121 16:46:15.211228 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/kube-rbac-proxy/0.log" Jan 21 16:46:15 crc kubenswrapper[4902]: I0121 16:46:15.228518 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-72rgj_4f8bf62b-aae0-4080-a5ee-2472a60fe41f/frr-k8s-webhook-server/0.log" Jan 21 16:46:15 crc kubenswrapper[4902]: I0121 16:46:15.251790 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/controller/0.log" Jan 21 16:46:17 crc kubenswrapper[4902]: I0121 16:46:17.769679 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:46:17 crc kubenswrapper[4902]: I0121 16:46:17.770404 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.252780 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.309400 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/reloader/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.407122 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr-metrics/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.501758 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.521700 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy-frr/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.527321 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-frr-files/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.534183 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-reloader/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.543194 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-metrics/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.573416 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c6bfc4dcb-mzr68_1ddec7fa-7afd-4d77-af77-509910e52c70/manager/0.log" Jan 21 16:46:18 crc kubenswrapper[4902]: I0121 16:46:18.583653 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79cc595b65-5xnzn_050f3d44-1ff2-4334-8fa8-c5124c7199d9/webhook-server/0.log" Jan 21 16:46:19 crc kubenswrapper[4902]: I0121 16:46:19.200523 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/speaker/0.log" Jan 21 16:46:19 crc kubenswrapper[4902]: I0121 16:46:19.211672 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/kube-rbac-proxy/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.609000 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/extract/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.621530 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/util/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.629293 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/pull/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.773763 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-j6fwd_66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e/manager/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.839317 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-nh8zr_b924ea4f-71c9-4f42-aa0a-a4945ea589e3/manager/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.859554 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-sdkxs_bc4c2749-7073-4bb8-8c87-736187565b08/manager/0.log" Jan 21 16:46:32 crc kubenswrapper[4902]: I0121 16:46:32.998325 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gffs4_3c1e8b4d-a47d-4a6e-be63-bfc41d04d964/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.052909 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-lttm9_56c38bff-8549-485e-a91f-1d89d801a8ee/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.075085 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-nqnfh_05001c4b-c8f0-46ea-bf02-d7537d8a373b/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.704699 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-46xm9_cea39ffd-421f-4b74-9f26-065f49e00786/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.728226 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-khcxt_f3f5f576-48b8-4175-8d70-d8de7e41a63a/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.843460 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-qwcvn_7d33c2a4-c369-4a5f-9592-289c162f095c/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.854158 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-x6xrb_a5d9aa95-7d14-4a6e-af38-dddad85007f4/manager/0.log" Jan 21 16:46:33 crc kubenswrapper[4902]: I0121 16:46:33.931145 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-xrlqr_01091192-af46-486f-8890-787505f3b41c/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.139878 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-8vfnj_0b55bf9c-cc65-446c-849e-035fb1bba4c4/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.276590 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-nql9r_b01862fd-dfad-4a73-ac90-5ef7823c06ea/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.328400 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-c2nb6_bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.342021 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986dhp6x8_14dc1630-021a-4b05-8ac4-d99368b51726/manager/0.log" Jan 21 16:46:34 crc kubenswrapper[4902]: I0121 16:46:34.474137 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-mvcwp_1fbcd3da-0b42-4d83-b774-776f9d1612d5/operator/0.log" Jan 21 16:46:36 crc kubenswrapper[4902]: I0121 16:46:36.707801 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-hr66g_77e35131-84f1-4df7-b6de-ceda247df931/manager/0.log" Jan 21 16:46:36 crc kubenswrapper[4902]: I0121 16:46:36.821425 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-dp8mf_2d05d6f5-a861-4117-b4a0-00e98da2fe57/registry-server/0.log" Jan 21 16:46:36 crc kubenswrapper[4902]: I0121 16:46:36.923829 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lljfd_3912b1da-b132-48da-9b67-1f4aeb2203c4/manager/0.log" Jan 21 16:46:36 crc kubenswrapper[4902]: I0121 16:46:36.973514 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-pmvgc_c5d64dc8-80f6-4076-9068-11ec25d524b5/manager/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.006222 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-s7vgs_1ffd452b-d331-4c80-a6f6-0b1b21d5fd84/operator/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.043559 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqmq2_1e685238-529c-4964-af9d-8abed4dfcfae/manager/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.220635 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-v7bj9_2ad74206-4131-4395-8392-9697c2c164eb/manager/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.235448 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gn5kf_624ad6d5-5647-43c8-8e62-751e4c5989b3/manager/0.log" Jan 21 16:46:37 crc kubenswrapper[4902]: I0121 16:46:37.247733 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-s8g8n_6783daa1-082d-4ab7-be65-dc2fb211be6c/manager/0.log" Jan 21 16:46:41 crc kubenswrapper[4902]: I0121 16:46:41.507623 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qm6gk_9467c15f-f3fe-4594-b97d-0838d43877d1/control-plane-machine-set-operator/0.log" Jan 21 16:46:41 crc kubenswrapper[4902]: I0121 16:46:41.530373 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-57jmg_91a268d0-59c0-4e7f-8b78-260d14051e34/kube-rbac-proxy/0.log" Jan 21 16:46:41 crc kubenswrapper[4902]: I0121 16:46:41.549929 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-57jmg_91a268d0-59c0-4e7f-8b78-260d14051e34/machine-api-operator/0.log" Jan 21 16:46:43 crc kubenswrapper[4902]: I0121 16:46:43.420060 4902 generic.go:334] "Generic (PLEG): container finished" podID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" containerID="31fc280c2d7e2874d5d3ebbb00ce9f04de5add4709a413f82f3fb2ff907c3669" exitCode=0 Jan 21 16:46:43 crc kubenswrapper[4902]: I0121 16:46:43.420377 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-4hbxl" event={"ID":"db88f391-e5de-44fc-8bb9-7d7b4bddd96d","Type":"ContainerDied","Data":"31fc280c2d7e2874d5d3ebbb00ce9f04de5add4709a413f82f3fb2ff907c3669"} Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.582017 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.614279 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-649bt/crc-debug-4hbxl"] Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.623337 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-649bt/crc-debug-4hbxl"] Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.774848 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") pod \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.774910 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") pod \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\" (UID: \"db88f391-e5de-44fc-8bb9-7d7b4bddd96d\") " Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.777183 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host" (OuterVolumeSpecName: "host") pod "db88f391-e5de-44fc-8bb9-7d7b4bddd96d" (UID: "db88f391-e5de-44fc-8bb9-7d7b4bddd96d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.781364 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5" (OuterVolumeSpecName: "kube-api-access-b8lt5") pod "db88f391-e5de-44fc-8bb9-7d7b4bddd96d" (UID: "db88f391-e5de-44fc-8bb9-7d7b4bddd96d"). InnerVolumeSpecName "kube-api-access-b8lt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.877584 4902 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:44 crc kubenswrapper[4902]: I0121 16:46:44.877621 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8lt5\" (UniqueName: \"kubernetes.io/projected/db88f391-e5de-44fc-8bb9-7d7b4bddd96d-kube-api-access-b8lt5\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.438263 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7c7697f0980d691d3313b527a6369dc55eb3e3c975f66aca6b883c28023fe16" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.438324 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-4hbxl" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.813533 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-649bt/crc-debug-pv9xv"] Jan 21 16:46:45 crc kubenswrapper[4902]: E0121 16:46:45.814091 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" containerName="container-00" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.814108 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" containerName="container-00" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.814386 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" containerName="container-00" Jan 21 16:46:45 crc kubenswrapper[4902]: I0121 16:46:45.815457 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.001149 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.001237 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.103417 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.103504 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.103750 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.125487 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") pod \"crc-debug-pv9xv\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.136713 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.315661 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db88f391-e5de-44fc-8bb9-7d7b4bddd96d" path="/var/lib/kubelet/pods/db88f391-e5de-44fc-8bb9-7d7b4bddd96d/volumes" Jan 21 16:46:46 crc kubenswrapper[4902]: I0121 16:46:46.456579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-pv9xv" event={"ID":"3fe6195f-4ceb-42d9-b303-f1e722166c5e","Type":"ContainerStarted","Data":"107d08e5a61bbb816ede570e57aabc9602def3d2777c3d991d3591f77d98535a"} Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.468655 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-pv9xv" event={"ID":"3fe6195f-4ceb-42d9-b303-f1e722166c5e","Type":"ContainerDied","Data":"899b0d27b49bb0fa1d47ffc0ca83404011feda128ea60d1928b8e625e3893f24"} Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.469093 4902 generic.go:334] "Generic (PLEG): container finished" podID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" containerID="899b0d27b49bb0fa1d47ffc0ca83404011feda128ea60d1928b8e625e3893f24" exitCode=0 Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.769894 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.769941 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.769982 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.770798 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.770857 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030" gracePeriod=600 Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.914746 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-649bt/crc-debug-pv9xv"] Jan 21 16:46:47 crc kubenswrapper[4902]: I0121 16:46:47.923369 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-649bt/crc-debug-pv9xv"] Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.481252 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" 
containerID="46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030" exitCode=0 Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.481314 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030"} Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.481662 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"} Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.481682 4902 scope.go:117] "RemoveContainer" containerID="10434db9b2d4ccbafe90e0a6b715d5da8f9734bd3ba91f776a3c95ef2b72e53d" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.602589 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.675793 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") pod \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.675918 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host" (OuterVolumeSpecName: "host") pod "3fe6195f-4ceb-42d9-b303-f1e722166c5e" (UID: "3fe6195f-4ceb-42d9-b303-f1e722166c5e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.676438 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") pod \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\" (UID: \"3fe6195f-4ceb-42d9-b303-f1e722166c5e\") " Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.677189 4902 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fe6195f-4ceb-42d9-b303-f1e722166c5e-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.682459 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6" (OuterVolumeSpecName: "kube-api-access-sclh6") pod "3fe6195f-4ceb-42d9-b303-f1e722166c5e" (UID: "3fe6195f-4ceb-42d9-b303-f1e722166c5e"). InnerVolumeSpecName "kube-api-access-sclh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:48 crc kubenswrapper[4902]: I0121 16:46:48.778568 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sclh6\" (UniqueName: \"kubernetes.io/projected/3fe6195f-4ceb-42d9-b303-f1e722166c5e-kube-api-access-sclh6\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.093451 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-649bt/crc-debug-w5ctj"] Jan 21 16:46:49 crc kubenswrapper[4902]: E0121 16:46:49.094459 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" containerName="container-00" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.094483 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" containerName="container-00" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.095167 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" containerName="container-00" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.096620 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.211448 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.211530 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.314133 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.314388 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.314557 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.336222 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") pod \"crc-debug-w5ctj\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " 
pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.424369 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:49 crc kubenswrapper[4902]: W0121 16:46:49.465910 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97270538_b46a_4318_9a2b_11eec116c8d3.slice/crio-b234558abb96a2a20989651bc81c72e57ae981e453627beb3d433dbd2d223483 WatchSource:0}: Error finding container b234558abb96a2a20989651bc81c72e57ae981e453627beb3d433dbd2d223483: Status 404 returned error can't find the container with id b234558abb96a2a20989651bc81c72e57ae981e453627beb3d433dbd2d223483 Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.495783 4902 scope.go:117] "RemoveContainer" containerID="899b0d27b49bb0fa1d47ffc0ca83404011feda128ea60d1928b8e625e3893f24" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.495812 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-pv9xv" Jan 21 16:46:49 crc kubenswrapper[4902]: I0121 16:46:49.497524 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-w5ctj" event={"ID":"97270538-b46a-4318-9a2b-11eec116c8d3","Type":"ContainerStarted","Data":"b234558abb96a2a20989651bc81c72e57ae981e453627beb3d433dbd2d223483"} Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.307266 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe6195f-4ceb-42d9-b303-f1e722166c5e" path="/var/lib/kubelet/pods/3fe6195f-4ceb-42d9-b303-f1e722166c5e/volumes" Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.517012 4902 generic.go:334] "Generic (PLEG): container finished" podID="97270538-b46a-4318-9a2b-11eec116c8d3" containerID="1a03d23c5092a33fc8a964770fef978491b258293bf35aedcd8a382acf457933" exitCode=0 Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.517088 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-649bt/crc-debug-w5ctj" event={"ID":"97270538-b46a-4318-9a2b-11eec116c8d3","Type":"ContainerDied","Data":"1a03d23c5092a33fc8a964770fef978491b258293bf35aedcd8a382acf457933"} Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.556873 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-649bt/crc-debug-w5ctj"] Jan 21 16:46:50 crc kubenswrapper[4902]: I0121 16:46:50.568618 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-649bt/crc-debug-w5ctj"] Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.679960 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.765746 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") pod \"97270538-b46a-4318-9a2b-11eec116c8d3\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.765827 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") pod \"97270538-b46a-4318-9a2b-11eec116c8d3\" (UID: \"97270538-b46a-4318-9a2b-11eec116c8d3\") " Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.766009 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host" (OuterVolumeSpecName: "host") pod "97270538-b46a-4318-9a2b-11eec116c8d3" (UID: "97270538-b46a-4318-9a2b-11eec116c8d3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.766586 4902 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97270538-b46a-4318-9a2b-11eec116c8d3-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.772256 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5" (OuterVolumeSpecName: "kube-api-access-44mr5") pod "97270538-b46a-4318-9a2b-11eec116c8d3" (UID: "97270538-b46a-4318-9a2b-11eec116c8d3"). InnerVolumeSpecName "kube-api-access-44mr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:51 crc kubenswrapper[4902]: I0121 16:46:51.868373 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44mr5\" (UniqueName: \"kubernetes.io/projected/97270538-b46a-4318-9a2b-11eec116c8d3-kube-api-access-44mr5\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:52 crc kubenswrapper[4902]: I0121 16:46:52.310451 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97270538-b46a-4318-9a2b-11eec116c8d3" path="/var/lib/kubelet/pods/97270538-b46a-4318-9a2b-11eec116c8d3/volumes" Jan 21 16:46:52 crc kubenswrapper[4902]: I0121 16:46:52.539617 4902 scope.go:117] "RemoveContainer" containerID="1a03d23c5092a33fc8a964770fef978491b258293bf35aedcd8a382acf457933" Jan 21 16:46:52 crc kubenswrapper[4902]: I0121 16:46:52.539647 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-649bt/crc-debug-w5ctj" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.795915 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:10 crc kubenswrapper[4902]: E0121 16:48:10.797077 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97270538-b46a-4318-9a2b-11eec116c8d3" containerName="container-00" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.797093 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="97270538-b46a-4318-9a2b-11eec116c8d3" containerName="container-00" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.797378 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="97270538-b46a-4318-9a2b-11eec116c8d3" containerName="container-00" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.799436 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.813162 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.945668 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.945877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:10 crc kubenswrapper[4902]: I0121 16:48:10.946124 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.048624 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.048738 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.048810 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") pod \"redhat-marketplace-r4kj6\" (UID: 
\"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.049256 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.049381 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.074018 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") pod \"redhat-marketplace-r4kj6\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.120361 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:11 crc kubenswrapper[4902]: I0121 16:48:11.599305 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:12 crc kubenswrapper[4902]: I0121 16:48:12.436342 4902 generic.go:334] "Generic (PLEG): container finished" podID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerID="d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54" exitCode=0 Jan 21 16:48:12 crc kubenswrapper[4902]: I0121 16:48:12.436424 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerDied","Data":"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54"} Jan 21 16:48:12 crc kubenswrapper[4902]: I0121 16:48:12.436639 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerStarted","Data":"9800e26045e04895284317f27a8063b63ad4f1c304ba26cb489d923200abe7c6"} Jan 21 16:48:13 crc kubenswrapper[4902]: I0121 16:48:13.446604 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerStarted","Data":"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900"} Jan 21 16:48:14 crc kubenswrapper[4902]: I0121 16:48:14.457635 4902 generic.go:334] "Generic (PLEG): container finished" podID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerID="35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900" exitCode=0 Jan 21 16:48:14 crc kubenswrapper[4902]: I0121 16:48:14.458154 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerDied","Data":"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900"} Jan 21 16:48:15 crc kubenswrapper[4902]: I0121 16:48:15.471186 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerStarted","Data":"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96"} Jan 21 16:48:15 crc kubenswrapper[4902]: I0121 16:48:15.506668 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r4kj6" podStartSLOduration=3.025205395 podStartE2EDuration="5.50665024s" podCreationTimestamp="2026-01-21 16:48:10 +0000 UTC" firstStartedPulling="2026-01-21 16:48:12.438554985 +0000 UTC m=+8054.515388024" lastFinishedPulling="2026-01-21 16:48:14.91999984 +0000 UTC m=+8056.996832869" observedRunningTime="2026-01-21 16:48:15.49499463 +0000 UTC m=+8057.571827689" watchObservedRunningTime="2026-01-21 16:48:15.50665024 +0000 UTC m=+8057.583483269" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.120558 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.120976 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.166493 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.577895 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:21 crc kubenswrapper[4902]: I0121 16:48:21.621341 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:23 crc kubenswrapper[4902]: I0121 16:48:23.554361 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r4kj6" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="registry-server" containerID="cri-o://ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" gracePeriod=2 Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.040513 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.141130 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") pod \"fae9ccc8-6a44-4a51-9379-4a9df0699618\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.141302 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") pod \"fae9ccc8-6a44-4a51-9379-4a9df0699618\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.141585 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") pod \"fae9ccc8-6a44-4a51-9379-4a9df0699618\" (UID: \"fae9ccc8-6a44-4a51-9379-4a9df0699618\") " Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.146350 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities" (OuterVolumeSpecName: "utilities") pod "fae9ccc8-6a44-4a51-9379-4a9df0699618" (UID: "fae9ccc8-6a44-4a51-9379-4a9df0699618"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.150855 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m" (OuterVolumeSpecName: "kube-api-access-vkg8m") pod "fae9ccc8-6a44-4a51-9379-4a9df0699618" (UID: "fae9ccc8-6a44-4a51-9379-4a9df0699618"). InnerVolumeSpecName "kube-api-access-vkg8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.184920 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fae9ccc8-6a44-4a51-9379-4a9df0699618" (UID: "fae9ccc8-6a44-4a51-9379-4a9df0699618"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.248589 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.248633 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae9ccc8-6a44-4a51-9379-4a9df0699618-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.248654 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkg8m\" (UniqueName: \"kubernetes.io/projected/fae9ccc8-6a44-4a51-9379-4a9df0699618-kube-api-access-vkg8m\") on node \"crc\" DevicePath \"\"" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.565517 4902 generic.go:334] "Generic (PLEG): container finished" podID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerID="ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" exitCode=0 Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.565586 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4kj6" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.565611 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerDied","Data":"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96"} Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.567065 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4kj6" event={"ID":"fae9ccc8-6a44-4a51-9379-4a9df0699618","Type":"ContainerDied","Data":"9800e26045e04895284317f27a8063b63ad4f1c304ba26cb489d923200abe7c6"} Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.567102 4902 scope.go:117] "RemoveContainer" containerID="ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.595825 4902 scope.go:117] "RemoveContainer" containerID="35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.598177 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.607516 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4kj6"] Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.613448 4902 scope.go:117] "RemoveContainer" containerID="d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.671603 4902 scope.go:117] "RemoveContainer" containerID="ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" Jan 21 16:48:24 crc kubenswrapper[4902]: E0121 16:48:24.672126 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96\": container with ID starting with ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96 not found: ID does not exist" containerID="ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672169 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96"} err="failed to get container status \"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96\": rpc error: code = NotFound desc = could not find container \"ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96\": container with ID starting with ae36bcd23667a7879e631ba585a4ec3bdb1f59cce64b762020f5e7138ae16f96 not found: ID does not exist" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672200 4902 scope.go:117] "RemoveContainer" containerID="35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900" Jan 21 16:48:24 crc kubenswrapper[4902]: E0121 16:48:24.672500 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900\": container with ID starting with 35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900 not found: ID does not exist" containerID="35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672536 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900"} err="failed to get container status \"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900\": rpc error: code = NotFound desc = could not find container \"35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900\": container with ID starting with 35f49c9eae975e2cd80dd2ddfb13bcfbdf508aa9edfdc7671c7e7ad98c86f900 not found: ID does not exist" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672562 4902 scope.go:117] "RemoveContainer" containerID="d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54" Jan 21 16:48:24 crc kubenswrapper[4902]: E0121 16:48:24.672766 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54\": container with ID starting with d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54 not found: ID does not exist" containerID="d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54" Jan 21 16:48:24 crc kubenswrapper[4902]: I0121 16:48:24.672788 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54"} err="failed to get container status \"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54\": rpc error: code = NotFound desc = could not find container \"d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54\": container with ID starting with d04813b0dfa34448d4bc479347c907437209745ae71579bdf28067674701bc54 not found: ID does not exist" Jan 21 16:48:26 crc kubenswrapper[4902]: I0121 16:48:26.501627 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" path="/var/lib/kubelet/pods/fae9ccc8-6a44-4a51-9379-4a9df0699618/volumes" Jan 21 16:48:28 crc kubenswrapper[4902]: I0121 16:48:28.052152 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4cd6m_12dae6d4-a2b1-4ef8-ae74-369697c9172b/cert-manager-controller/0.log" Jan 21 16:48:28 crc kubenswrapper[4902]: I0121 16:48:28.073958 4902 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-llf68_21799993-1de7-4aef-9cfa-c132249ecf74/cert-manager-cainjector/0.log" Jan 21 16:48:28 crc kubenswrapper[4902]: I0121 16:48:28.083390 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-p2522_9093daac-4fd2-4075-8e73-d358cd885c3c/cert-manager-webhook/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.268292 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-6vz5c_ce3bf701-2498-42d7-969d-8944df02f1c7/nmstate-console-plugin/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.306503 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p9t9n_14dd02e5-8cb3-4382-9107-5f5b698a2701/nmstate-handler/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.316729 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-x6qnj_d406f136-7416-4694-b6cd-d6bdf6b60e1f/nmstate-metrics/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.332275 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-x6qnj_d406f136-7416-4694-b6cd-d6bdf6b60e1f/kube-rbac-proxy/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.342774 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-q2fs2_bb74694a-8b82-4c31-85da-4ba2c732bbb8/nmstate-operator/0.log" Jan 21 16:48:34 crc kubenswrapper[4902]: I0121 16:48:34.362527 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-88bkr_87768889-c41f-4563-8b38-3d939fa22303/nmstate-webhook/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.514401 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tw4cr_5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5/prometheus-operator/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.525124 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-csqks_c014cd52-9da2-4fa7-96b6-0a400835f56e/prometheus-operator-admission-webhook/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.535279 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-l469x_dce978e0-318d-4086-8594-08da83f1fe23/prometheus-operator-admission-webhook/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.575763 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6xc5d_cdfe14cf-a2d6-4df7-92b5-c4146bdab44d/operator/0.log" Jan 21 16:48:40 crc kubenswrapper[4902]: I0121 16:48:40.589206 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k6f6k_ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a/perses-operator/0.log" Jan 21 16:48:46 crc kubenswrapper[4902]: I0121 16:48:46.882297 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/controller/0.log" Jan 21 16:48:46 crc kubenswrapper[4902]: I0121 16:48:46.889806 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/kube-rbac-proxy/0.log" Jan 21 16:48:46 crc kubenswrapper[4902]: I0121 16:48:46.904519 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-72rgj_4f8bf62b-aae0-4080-a5ee-2472a60fe41f/frr-k8s-webhook-server/0.log" Jan 21 16:48:46 crc kubenswrapper[4902]: I0121 16:48:46.927759 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/controller/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.642940 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.653458 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/reloader/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.658321 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr-metrics/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.667411 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.676008 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy-frr/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.681957 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-frr-files/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.689079 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-reloader/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.694321 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-metrics/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.724679 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c6bfc4dcb-mzr68_1ddec7fa-7afd-4d77-af77-509910e52c70/manager/0.log" Jan 21 16:48:49 crc kubenswrapper[4902]: I0121 16:48:49.741385 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79cc595b65-5xnzn_050f3d44-1ff2-4334-8fa8-c5124c7199d9/webhook-server/0.log" Jan 21 16:48:50 crc kubenswrapper[4902]: I0121 16:48:50.359823 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/speaker/0.log" Jan 21 16:48:50 crc kubenswrapper[4902]: I0121 16:48:50.520938 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/kube-rbac-proxy/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.132032 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj_b4942197-db6e-4bb6-af6d-24694a007a0b/extract/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.145416 4902 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj_b4942197-db6e-4bb6-af6d-24694a007a0b/util/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.171721 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a2kjnj_b4942197-db6e-4bb6-af6d-24694a007a0b/pull/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.181949 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw_5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea/extract/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.190335 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw_5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea/util/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.200501 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcwxkqw_5c2c0c1f-f8a5-4001-9c9a-0109aa7fa7ea/pull/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.214550 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr_91ab62d2-e4b6-44ce-afc8-292ac5685c46/extract/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.225434 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr_91ab62d2-e4b6-44ce-afc8-292ac5685c46/util/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.236297 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135nxgr_91ab62d2-e4b6-44ce-afc8-292ac5685c46/pull/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.253187 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn_052d7e2b-1135-41ae-8c3e-a750c22fce27/extract/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.262598 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn_052d7e2b-1135-41ae-8c3e-a750c22fce27/util/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.272133 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086vrzn_052d7e2b-1135-41ae-8c3e-a750c22fce27/pull/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.933910 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mklsf_2ec2690b-73b2-45db-b14b-355b80ab92a6/registry-server/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.943376 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mklsf_2ec2690b-73b2-45db-b14b-355b80ab92a6/extract-utilities/0.log" Jan 21 16:48:53 crc kubenswrapper[4902]: I0121 16:48:53.950057 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mklsf_2ec2690b-73b2-45db-b14b-355b80ab92a6/extract-content/0.log" Jan 21 16:48:54 crc kubenswrapper[4902]: 
I0121 16:48:54.848522 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsh4_e5fe57c1-6b56-4abe-8067-dd74165e5937/registry-server/0.log" Jan 21 16:48:54 crc kubenswrapper[4902]: I0121 16:48:54.853354 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsh4_e5fe57c1-6b56-4abe-8067-dd74165e5937/extract-utilities/0.log" Jan 21 16:48:54 crc kubenswrapper[4902]: I0121 16:48:54.864897 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjsh4_e5fe57c1-6b56-4abe-8067-dd74165e5937/extract-content/0.log" Jan 21 16:48:54 crc kubenswrapper[4902]: I0121 16:48:54.893865 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-z4vkp_021a0823-715d-4b67-b5b2-b52ec6d6c7e8/marketplace-operator/0.log" Jan 21 16:48:55 crc kubenswrapper[4902]: I0121 16:48:55.223962 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ppndl_663aee99-c55e-45ba-b5ff-a67def0f524e/registry-server/0.log" Jan 21 16:48:55 crc kubenswrapper[4902]: I0121 16:48:55.229873 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ppndl_663aee99-c55e-45ba-b5ff-a67def0f524e/extract-utilities/0.log" Jan 21 16:48:55 crc kubenswrapper[4902]: I0121 16:48:55.236333 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ppndl_663aee99-c55e-45ba-b5ff-a67def0f524e/extract-content/0.log" Jan 21 16:48:56 crc kubenswrapper[4902]: I0121 16:48:56.278106 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kplb_fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c/registry-server/0.log" Jan 21 16:48:56 crc kubenswrapper[4902]: I0121 16:48:56.283211 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kplb_fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c/extract-utilities/0.log" Jan 21 16:48:56 crc kubenswrapper[4902]: I0121 16:48:56.290526 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kplb_fddf72a9-e04a-41e1-9f81-f41a8d7b8d9c/extract-content/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.433707 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tw4cr_5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5/prometheus-operator/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.446271 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-csqks_c014cd52-9da2-4fa7-96b6-0a400835f56e/prometheus-operator-admission-webhook/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.459167 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-l469x_dce978e0-318d-4086-8594-08da83f1fe23/prometheus-operator-admission-webhook/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.478572 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6xc5d_cdfe14cf-a2d6-4df7-92b5-c4146bdab44d/operator/0.log" Jan 21 16:48:59 crc kubenswrapper[4902]: I0121 16:48:59.487893 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k6f6k_ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a/perses-operator/0.log" Jan 21 16:49:17 crc kubenswrapper[4902]: I0121 16:49:17.769601 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:49:17 crc kubenswrapper[4902]: I0121 16:49:17.770219 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:49:47 crc kubenswrapper[4902]: I0121 16:49:47.770134 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:49:47 crc kubenswrapper[4902]: I0121 16:49:47.770629 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.770128 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.770551 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.770592 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.771435 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:50:17 crc kubenswrapper[4902]: I0121 16:50:17.771483 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" gracePeriod=600 Jan 21 16:50:18 crc kubenswrapper[4902]: E0121 16:50:18.408900 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:18 crc kubenswrapper[4902]: I0121 16:50:18.659902 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" exitCode=0 Jan 21 16:50:18 crc kubenswrapper[4902]: I0121 16:50:18.659947 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc"} Jan 21 16:50:18 crc kubenswrapper[4902]: I0121 16:50:18.659981 4902 scope.go:117] "RemoveContainer" containerID="46dab60a77a31c9c125a7eb039a17b28b44898970f8705055f9ff1b6d0fef030" Jan 21 16:50:18 crc kubenswrapper[4902]: I0121 16:50:18.660915 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:50:18 crc kubenswrapper[4902]: E0121 16:50:18.661289 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:30 crc kubenswrapper[4902]: I0121 16:50:30.295233 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:50:30 crc kubenswrapper[4902]: E0121 16:50:30.296018 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:43 crc kubenswrapper[4902]: I0121 16:50:43.295467 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:50:43 crc kubenswrapper[4902]: E0121 16:50:43.296182 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.348340 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-tw4cr_5bef9b7b-7b8b-4a3b-82ca-cc12bfa8d7a5/prometheus-operator/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.360367 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-csqks_c014cd52-9da2-4fa7-96b6-0a400835f56e/prometheus-operator-admission-webhook/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.371389 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cb6669c59-l469x_dce978e0-318d-4086-8594-08da83f1fe23/prometheus-operator-admission-webhook/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.397169 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6xc5d_cdfe14cf-a2d6-4df7-92b5-c4146bdab44d/operator/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.405058 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k6f6k_ea8d550d-3cd6-4d90-9209-f11bbf7d4e3a/perses-operator/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.604171 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4cd6m_12dae6d4-a2b1-4ef8-ae74-369697c9172b/cert-manager-controller/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.637952 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-llf68_21799993-1de7-4aef-9cfa-c132249ecf74/cert-manager-cainjector/0.log" Jan 21 16:50:45 crc kubenswrapper[4902]: I0121 16:50:45.650984 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-p2522_9093daac-4fd2-4075-8e73-d358cd885c3c/cert-manager-webhook/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.537772 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/extract/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.552575 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/util/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.560845 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/pull/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.721715 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-j6fwd_66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e/manager/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.849528 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-nh8zr_b924ea4f-71c9-4f42-aa0a-a4945ea589e3/manager/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.865081 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-sdkxs_bc4c2749-7073-4bb8-8c87-736187565b08/manager/0.log" Jan 21 16:50:46 crc kubenswrapper[4902]: I0121 16:50:46.995468 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/controller/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.005567 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-h2pgt_694bf42b-c612-44c2-964b-c91336b8afa1/kube-rbac-proxy/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.017723 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-72rgj_4f8bf62b-aae0-4080-a5ee-2472a60fe41f/frr-k8s-webhook-server/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.046499 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gffs4_3c1e8b4d-a47d-4a6e-be63-bfc41d04d964/manager/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.053875 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/controller/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.113644 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-lttm9_56c38bff-8549-485e-a91f-1d89d801a8ee/manager/0.log" Jan 21 16:50:47 crc kubenswrapper[4902]: I0121 16:50:47.138200 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-nqnfh_05001c4b-c8f0-46ea-bf02-d7537d8a373b/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.009589 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-46xm9_cea39ffd-421f-4b74-9f26-065f49e00786/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.027489 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-khcxt_f3f5f576-48b8-4175-8d70-d8de7e41a63a/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.232136 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-qwcvn_7d33c2a4-c369-4a5f-9592-289c162f095c/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.243114 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-x6xrb_a5d9aa95-7d14-4a6e-af38-dddad85007f4/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.323955 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-xrlqr_01091192-af46-486f-8890-787505f3b41c/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.423809 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-8vfnj_0b55bf9c-cc65-446c-849e-035fb1bba4c4/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.643157 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-nql9r_b01862fd-dfad-4a73-ac90-5ef7823c06ea/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.702531 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-c2nb6_bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.727318 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986dhp6x8_14dc1630-021a-4b05-8ac4-d99368b51726/manager/0.log" Jan 21 16:50:48 crc kubenswrapper[4902]: I0121 16:50:48.953575 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-mvcwp_1fbcd3da-0b42-4d83-b774-776f9d1612d5/operator/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.897991 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.927393 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/reloader/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.931886 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/frr-metrics/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.939932 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.949187 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/kube-rbac-proxy-frr/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.956430 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-frr-files/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.957886 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-hr66g_77e35131-84f1-4df7-b6de-ceda247df931/manager/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.966445 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-reloader/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.973096 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xpzj8_6fc6639b-9150-4158-836f-1ffc1c4f5339/cp-metrics/0.log" Jan 21 16:50:51 crc kubenswrapper[4902]: I0121 16:50:51.999756 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c6bfc4dcb-mzr68_1ddec7fa-7afd-4d77-af77-509910e52c70/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.008470 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79cc595b65-5xnzn_050f3d44-1ff2-4334-8fa8-c5124c7199d9/webhook-server/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.143536 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-dp8mf_2d05d6f5-a861-4117-b4a0-00e98da2fe57/registry-server/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.284692 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lljfd_3912b1da-b132-48da-9b67-1f4aeb2203c4/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.329758 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-pmvgc_c5d64dc8-80f6-4076-9068-11ec25d524b5/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.378097 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-s7vgs_1ffd452b-d331-4c80-a6f6-0b1b21d5fd84/operator/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.436949 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqmq2_1e685238-529c-4964-af9d-8abed4dfcfae/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.700476 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-v7bj9_2ad74206-4131-4395-8392-9697c2c164eb/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.714763 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gn5kf_624ad6d5-5647-43c8-8e62-751e4c5989b3/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.724940 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-s8g8n_6783daa1-082d-4ab7-be65-dc2fb211be6c/manager/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.932288 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/speaker/0.log" Jan 21 16:50:52 crc kubenswrapper[4902]: I0121 16:50:52.941750 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5m6ct_4fbfffc0-8fac-4684-9cc8-2a3bcc3cb501/kube-rbac-proxy/0.log" Jan 21 16:50:53 crc kubenswrapper[4902]: I0121 16:50:53.776331 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4cd6m_12dae6d4-a2b1-4ef8-ae74-369697c9172b/cert-manager-controller/0.log" Jan 21 16:50:53 crc kubenswrapper[4902]: I0121 16:50:53.799585 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-llf68_21799993-1de7-4aef-9cfa-c132249ecf74/cert-manager-cainjector/0.log" Jan 21 16:50:53 crc kubenswrapper[4902]: I0121 16:50:53.817004 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-p2522_9093daac-4fd2-4075-8e73-d358cd885c3c/cert-manager-webhook/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.201675 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-6vz5c_ce3bf701-2498-42d7-969d-8944df02f1c7/nmstate-console-plugin/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.221132 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p9t9n_14dd02e5-8cb3-4382-9107-5f5b698a2701/nmstate-handler/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.232402 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-x6qnj_d406f136-7416-4694-b6cd-d6bdf6b60e1f/nmstate-metrics/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.240882 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-x6qnj_d406f136-7416-4694-b6cd-d6bdf6b60e1f/kube-rbac-proxy/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: 
I0121 16:50:54.256652 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-q2fs2_bb74694a-8b82-4c31-85da-4ba2c732bbb8/nmstate-operator/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.267945 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-88bkr_87768889-c41f-4563-8b38-3d939fa22303/nmstate-webhook/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.466608 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qm6gk_9467c15f-f3fe-4594-b97d-0838d43877d1/control-plane-machine-set-operator/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.481108 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-57jmg_91a268d0-59c0-4e7f-8b78-260d14051e34/kube-rbac-proxy/0.log" Jan 21 16:50:54 crc kubenswrapper[4902]: I0121 16:50:54.490342 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-57jmg_91a268d0-59c0-4e7f-8b78-260d14051e34/machine-api-operator/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.102651 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/extract/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.109748 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/util/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.117649 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2eqlgtv_f7119ded-6a7d-468d-acc4-9d1d1045656c/pull/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.252920 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-j6fwd_66bb9ed9-5aee-41c4-a7d0-4b2ff5cff91e/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.330680 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-nh8zr_b924ea4f-71c9-4f42-aa0a-a4945ea589e3/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.341591 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-sdkxs_bc4c2749-7073-4bb8-8c87-736187565b08/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.573820 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gffs4_3c1e8b4d-a47d-4a6e-be63-bfc41d04d964/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.635727 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-lttm9_56c38bff-8549-485e-a91f-1d89d801a8ee/manager/0.log" Jan 21 16:50:55 crc kubenswrapper[4902]: I0121 16:50:55.655513 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-nqnfh_05001c4b-c8f0-46ea-bf02-d7537d8a373b/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 
16:50:56.295810 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:50:56 crc kubenswrapper[4902]: E0121 16:50:56.296186 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.434355 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-46xm9_cea39ffd-421f-4b74-9f26-065f49e00786/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.446832 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-khcxt_f3f5f576-48b8-4175-8d70-d8de7e41a63a/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.617604 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-qwcvn_7d33c2a4-c369-4a5f-9592-289c162f095c/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.632161 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-x6xrb_a5d9aa95-7d14-4a6e-af38-dddad85007f4/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.720660 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-xrlqr_01091192-af46-486f-8890-787505f3b41c/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.804018 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-8vfnj_0b55bf9c-cc65-446c-849e-035fb1bba4c4/manager/0.log" Jan 21 16:50:56 crc kubenswrapper[4902]: I0121 16:50:56.975322 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-nql9r_b01862fd-dfad-4a73-ac90-5ef7823c06ea/manager/0.log" Jan 21 16:50:57 crc kubenswrapper[4902]: I0121 16:50:57.030674 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-c2nb6_bc7bedc3-7b23-4f5c-bfbb-7b05694e6b90/manager/0.log" Jan 21 16:50:57 crc kubenswrapper[4902]: I0121 16:50:57.051875 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986dhp6x8_14dc1630-021a-4b05-8ac4-d99368b51726/manager/0.log" Jan 21 16:50:57 crc kubenswrapper[4902]: I0121 16:50:57.225021 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-mvcwp_1fbcd3da-0b42-4d83-b774-776f9d1612d5/operator/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.581512 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-hr66g_77e35131-84f1-4df7-b6de-ceda247df931/manager/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.719133 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-dp8mf_2d05d6f5-a861-4117-b4a0-00e98da2fe57/registry-server/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.810492 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-lljfd_3912b1da-b132-48da-9b67-1f4aeb2203c4/manager/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.858473 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-pmvgc_c5d64dc8-80f6-4076-9068-11ec25d524b5/manager/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.887999 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-s7vgs_1ffd452b-d331-4c80-a6f6-0b1b21d5fd84/operator/0.log" Jan 21 16:50:59 crc kubenswrapper[4902]: I0121 16:50:59.930630 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-wqmq2_1e685238-529c-4964-af9d-8abed4dfcfae/manager/0.log" Jan 21 16:51:00 crc kubenswrapper[4902]: I0121 16:51:00.126558 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-v7bj9_2ad74206-4131-4395-8392-9697c2c164eb/manager/0.log" Jan 21 16:51:00 crc kubenswrapper[4902]: I0121 16:51:00.135435 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-gn5kf_624ad6d5-5647-43c8-8e62-751e4c5989b3/manager/0.log" Jan 21 16:51:00 crc kubenswrapper[4902]: I0121 16:51:00.146328 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-s8g8n_6783daa1-082d-4ab7-be65-dc2fb211be6c/manager/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.827221 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/kube-multus-additional-cni-plugins/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.835447 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/egress-router-binary-copy/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.844350 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/cni-plugins/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.851000 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/bond-cni-plugin/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.860447 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/routeoverride-cni/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.873828 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/whereabouts-cni-bincopy/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.880954 4902 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-h68nf_7dbee8a9-6952-46b5-a958-ff8f1847fabd/whereabouts-cni/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.929718 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-q69sb_4c2958e3-5395-4efd-8b8f-f3e70fd9fcea/multus-admission-controller/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.937127 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-q69sb_4c2958e3-5395-4efd-8b8f-f3e70fd9fcea/kube-rbac-proxy/0.log" Jan 21 16:51:01 crc kubenswrapper[4902]: I0121 16:51:01.986675 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/2.log" Jan 21 16:51:02 crc kubenswrapper[4902]: I0121 16:51:02.199195 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mztd6_037b55cf-cb9e-41ce-8b1e-3898f490a4aa/kube-multus/3.log" Jan 21 16:51:02 crc kubenswrapper[4902]: I0121 16:51:02.253551 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-kq588_05d94e6a-249a-484c-8895-085e81f1dfaa/network-metrics-daemon/0.log" Jan 21 16:51:02 crc kubenswrapper[4902]: I0121 16:51:02.261154 4902 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-kq588_05d94e6a-249a-484c-8895-085e81f1dfaa/kube-rbac-proxy/0.log" Jan 21 16:51:08 crc kubenswrapper[4902]: I0121 16:51:08.305657 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:51:08 crc kubenswrapper[4902]: E0121 16:51:08.306590 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.805674 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:12 crc kubenswrapper[4902]: E0121 16:51:12.807269 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="extract-utilities" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.807302 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="extract-utilities" Jan 21 16:51:12 crc kubenswrapper[4902]: E0121 16:51:12.807393 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="extract-content" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.807412 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="extract-content" Jan 21 16:51:12 crc kubenswrapper[4902]: E0121 16:51:12.807447 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="registry-server" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.807464 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" 
containerName="registry-server" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.808004 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae9ccc8-6a44-4a51-9379-4a9df0699618" containerName="registry-server" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.829279 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.829502 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.969490 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.969977 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:12 crc kubenswrapper[4902]: I0121 16:51:12.970028 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.071584 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.071735 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.071773 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.072451 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.072442 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.089828 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") pod \"community-operators-7wg99\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.159602 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:13 crc kubenswrapper[4902]: I0121 16:51:13.814066 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:14 crc kubenswrapper[4902]: I0121 16:51:14.215346 4902 generic.go:334] "Generic (PLEG): container finished" podID="143242b4-3aff-4e7a-9b24-30668b357d16" containerID="39fb987f6fc04b8891dfae5f80b4bcb45c7d727cc3da63ac2ce75a0959709d9c" exitCode=0 Jan 21 16:51:14 crc kubenswrapper[4902]: I0121 16:51:14.215436 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerDied","Data":"39fb987f6fc04b8891dfae5f80b4bcb45c7d727cc3da63ac2ce75a0959709d9c"} Jan 21 16:51:14 crc kubenswrapper[4902]: I0121 16:51:14.215746 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerStarted","Data":"bf23ec466aed83cc1d86a8222ea846f4133c82d2cb505fed79112940bbd33e10"} Jan 21 16:51:14 crc kubenswrapper[4902]: I0121 16:51:14.219487 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:51:15 crc kubenswrapper[4902]: I0121 16:51:15.233267 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerStarted","Data":"b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f"} Jan 21 16:51:16 crc kubenswrapper[4902]: E0121 16:51:16.076369 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod143242b4_3aff_4e7a_9b24_30668b357d16.slice/crio-conmon-b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod143242b4_3aff_4e7a_9b24_30668b357d16.slice/crio-b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:51:16 crc kubenswrapper[4902]: I0121 16:51:16.243309 4902 generic.go:334] "Generic (PLEG): container finished" podID="143242b4-3aff-4e7a-9b24-30668b357d16" containerID="b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f" exitCode=0 Jan 21 16:51:16 crc kubenswrapper[4902]: I0121 16:51:16.243358 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" 
event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerDied","Data":"b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f"} Jan 21 16:51:17 crc kubenswrapper[4902]: I0121 16:51:17.260509 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerStarted","Data":"9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1"} Jan 21 16:51:17 crc kubenswrapper[4902]: I0121 16:51:17.292223 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wg99" podStartSLOduration=2.837766131 podStartE2EDuration="5.292203241s" podCreationTimestamp="2026-01-21 16:51:12 +0000 UTC" firstStartedPulling="2026-01-21 16:51:14.219107239 +0000 UTC m=+8236.295940278" lastFinishedPulling="2026-01-21 16:51:16.673544359 +0000 UTC m=+8238.750377388" observedRunningTime="2026-01-21 16:51:17.287391886 +0000 UTC m=+8239.364224915" watchObservedRunningTime="2026-01-21 16:51:17.292203241 +0000 UTC m=+8239.369036270" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.159905 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.160522 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.207796 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.295120 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:51:23 crc kubenswrapper[4902]: E0121 16:51:23.295633 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.396129 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:23 crc kubenswrapper[4902]: I0121 16:51:23.446490 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:25 crc kubenswrapper[4902]: I0121 16:51:25.349437 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wg99" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="registry-server" containerID="cri-o://9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1" gracePeriod=2 Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.393817 4902 generic.go:334] "Generic (PLEG): container finished" podID="143242b4-3aff-4e7a-9b24-30668b357d16" containerID="9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1" exitCode=0 Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.394163 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" 
event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerDied","Data":"9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1"} Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.509846 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.645987 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") pod \"143242b4-3aff-4e7a-9b24-30668b357d16\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.646064 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") pod \"143242b4-3aff-4e7a-9b24-30668b357d16\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.646139 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") pod \"143242b4-3aff-4e7a-9b24-30668b357d16\" (UID: \"143242b4-3aff-4e7a-9b24-30668b357d16\") " Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.646956 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities" (OuterVolumeSpecName: "utilities") pod "143242b4-3aff-4e7a-9b24-30668b357d16" (UID: "143242b4-3aff-4e7a-9b24-30668b357d16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.652070 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl" (OuterVolumeSpecName: "kube-api-access-bqswl") pod "143242b4-3aff-4e7a-9b24-30668b357d16" (UID: "143242b4-3aff-4e7a-9b24-30668b357d16"). InnerVolumeSpecName "kube-api-access-bqswl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.700401 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "143242b4-3aff-4e7a-9b24-30668b357d16" (UID: "143242b4-3aff-4e7a-9b24-30668b357d16"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.748849 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqswl\" (UniqueName: \"kubernetes.io/projected/143242b4-3aff-4e7a-9b24-30668b357d16-kube-api-access-bqswl\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.748879 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:26 crc kubenswrapper[4902]: I0121 16:51:26.748889 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143242b4-3aff-4e7a-9b24-30668b357d16-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.404142 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wg99" event={"ID":"143242b4-3aff-4e7a-9b24-30668b357d16","Type":"ContainerDied","Data":"bf23ec466aed83cc1d86a8222ea846f4133c82d2cb505fed79112940bbd33e10"} Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.404511 4902 scope.go:117] "RemoveContainer" containerID="9d8f98e938d60b36e7af87ed0a3021f240993a31ebde5f6419b6b63c546c26a1" Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.404710 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wg99" Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.429289 4902 scope.go:117] "RemoveContainer" containerID="b26802e67045e55d7b02cb67bbfff611c338ee5ed68aa72686be32a75ef95e8f" Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.446577 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.457480 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wg99"] Jan 21 16:51:27 crc kubenswrapper[4902]: I0121 16:51:27.463833 4902 scope.go:117] "RemoveContainer" containerID="39fb987f6fc04b8891dfae5f80b4bcb45c7d727cc3da63ac2ce75a0959709d9c" Jan 21 16:51:28 crc kubenswrapper[4902]: I0121 16:51:28.315989 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" path="/var/lib/kubelet/pods/143242b4-3aff-4e7a-9b24-30668b357d16/volumes" Jan 21 16:51:35 crc kubenswrapper[4902]: I0121 16:51:35.301242 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:51:35 crc kubenswrapper[4902]: E0121 16:51:35.303430 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:51:48 crc kubenswrapper[4902]: I0121 16:51:48.302286 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:51:48 crc kubenswrapper[4902]: E0121 16:51:48.304485 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:00 crc kubenswrapper[4902]: I0121 16:52:00.294966 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:00 crc kubenswrapper[4902]: E0121 16:52:00.295874 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:12 crc kubenswrapper[4902]: I0121 16:52:12.295545 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:12 crc kubenswrapper[4902]: E0121 16:52:12.296731 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:25 crc kubenswrapper[4902]: I0121 16:52:25.296148 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:25 crc kubenswrapper[4902]: E0121 16:52:25.297485 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:36 crc kubenswrapper[4902]: I0121 16:52:36.533024 4902 scope.go:117] "RemoveContainer" containerID="31fc280c2d7e2874d5d3ebbb00ce9f04de5add4709a413f82f3fb2ff907c3669" Jan 21 16:52:39 crc kubenswrapper[4902]: I0121 16:52:39.295513 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:39 crc kubenswrapper[4902]: E0121 16:52:39.296741 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:52:54 crc kubenswrapper[4902]: I0121 16:52:54.295903 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:52:54 crc kubenswrapper[4902]: E0121 16:52:54.296592 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.168278 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"] Jan 21 16:53:01 crc kubenswrapper[4902]: E0121 16:53:01.172580 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="registry-server" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.172625 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="registry-server" Jan 21 16:53:01 crc kubenswrapper[4902]: E0121 16:53:01.172672 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="extract-content" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.172680 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="extract-content" Jan 21 16:53:01 crc kubenswrapper[4902]: E0121 16:53:01.172695 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="extract-utilities" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.172704 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="extract-utilities" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.172963 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="143242b4-3aff-4e7a-9b24-30668b357d16" containerName="registry-server" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.174864 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.182463 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"] Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.359319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.359396 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.359506 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.461193 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.461422 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.461482 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.462198 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.462775 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.492151 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") pod \"redhat-operators-54qdd\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.504075 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:01 crc kubenswrapper[4902]: I0121 16:53:01.991194 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"] Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.428711 4902 generic.go:334] "Generic (PLEG): container finished" podID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerID="275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84" exitCode=0 Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.428767 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerDied","Data":"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84"} Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.428798 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerStarted","Data":"ef9cc39908a6f856e15324133e405fe3affe18c520ff62385e5a7910a73603f5"} Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.966463 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-96gtf"] Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.972956 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:02 crc kubenswrapper[4902]: I0121 16:53:02.978629 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"] Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.104603 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.105148 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.105612 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209008 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209234 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209312 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209709 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.209889 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.239137 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") pod \"certified-operators-96gtf\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.314460 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:03 crc kubenswrapper[4902]: I0121 16:53:03.851002 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"] Jan 21 16:53:03 crc kubenswrapper[4902]: W0121 16:53:03.854006 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d7d3b40_1a5c_452d_864a_7e67d8c1e7bd.slice/crio-f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0 WatchSource:0}: Error finding container f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0: Status 404 returned error can't find the container with id f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0 Jan 21 16:53:04 crc kubenswrapper[4902]: I0121 16:53:04.460070 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerStarted","Data":"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10"} Jan 21 16:53:04 crc kubenswrapper[4902]: I0121 16:53:04.461593 4902 generic.go:334] "Generic (PLEG): container finished" podID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerID="5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984" exitCode=0 Jan 21 16:53:04 crc kubenswrapper[4902]: I0121 16:53:04.461621 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerDied","Data":"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984"} Jan 21 16:53:04 crc kubenswrapper[4902]: I0121 16:53:04.461636 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerStarted","Data":"f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0"} Jan 21 16:53:05 crc kubenswrapper[4902]: I0121 16:53:05.295541 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:53:05 crc kubenswrapper[4902]: E0121 16:53:05.296143 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:53:05 crc kubenswrapper[4902]: I0121 16:53:05.471800 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerStarted","Data":"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0"} Jan 21 16:53:06 crc kubenswrapper[4902]: I0121 16:53:06.482229 4902 generic.go:334] "Generic (PLEG): container finished" 
podID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerID="1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10" exitCode=0 Jan 21 16:53:06 crc kubenswrapper[4902]: I0121 16:53:06.482292 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerDied","Data":"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10"} Jan 21 16:53:07 crc kubenswrapper[4902]: I0121 16:53:07.507931 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerStarted","Data":"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea"} Jan 21 16:53:07 crc kubenswrapper[4902]: I0121 16:53:07.513532 4902 generic.go:334] "Generic (PLEG): container finished" podID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerID="7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0" exitCode=0 Jan 21 16:53:07 crc kubenswrapper[4902]: I0121 16:53:07.513576 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerDied","Data":"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0"} Jan 21 16:53:07 crc kubenswrapper[4902]: I0121 16:53:07.532819 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-54qdd" podStartSLOduration=1.9059979249999999 podStartE2EDuration="6.532802882s" podCreationTimestamp="2026-01-21 16:53:01 +0000 UTC" firstStartedPulling="2026-01-21 16:53:02.437024812 +0000 UTC m=+8344.513857841" lastFinishedPulling="2026-01-21 16:53:07.063829769 +0000 UTC m=+8349.140662798" observedRunningTime="2026-01-21 16:53:07.529509089 +0000 UTC m=+8349.606342128" watchObservedRunningTime="2026-01-21 16:53:07.532802882 +0000 UTC m=+8349.609635911" Jan 21 16:53:08 crc kubenswrapper[4902]: I0121 16:53:08.527758 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerStarted","Data":"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1"} Jan 21 16:53:11 crc kubenswrapper[4902]: I0121 16:53:11.504808 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:11 crc kubenswrapper[4902]: I0121 16:53:11.507709 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:12 crc kubenswrapper[4902]: I0121 16:53:12.567054 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-54qdd" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" probeResult="failure" output=< Jan 21 16:53:12 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 16:53:12 crc kubenswrapper[4902]: > Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.315018 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.315127 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.363076 4902 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.387378 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-96gtf" podStartSLOduration=7.840538326 podStartE2EDuration="11.387032046s" podCreationTimestamp="2026-01-21 16:53:02 +0000 UTC" firstStartedPulling="2026-01-21 16:53:04.463323941 +0000 UTC m=+8346.540156960" lastFinishedPulling="2026-01-21 16:53:08.009817651 +0000 UTC m=+8350.086650680" observedRunningTime="2026-01-21 16:53:08.55458262 +0000 UTC m=+8350.631415659" watchObservedRunningTime="2026-01-21 16:53:13.387032046 +0000 UTC m=+8355.463865075" Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.622584 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:13 crc kubenswrapper[4902]: I0121 16:53:13.749910 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"] Jan 21 16:53:15 crc kubenswrapper[4902]: I0121 16:53:15.592113 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-96gtf" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="registry-server" containerID="cri-o://f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1" gracePeriod=2 Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.087517 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.224624 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") pod \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.224763 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") pod \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.224895 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") pod \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\" (UID: \"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd\") " Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.225903 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities" (OuterVolumeSpecName: "utilities") pod "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" (UID: "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.231195 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw" (OuterVolumeSpecName: "kube-api-access-hv8gw") pod "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" (UID: "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd"). 
InnerVolumeSpecName "kube-api-access-hv8gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.268866 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" (UID: "0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.328751 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv8gw\" (UniqueName: \"kubernetes.io/projected/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-kube-api-access-hv8gw\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.328791 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.328803 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607010 4902 generic.go:334] "Generic (PLEG): container finished" podID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerID="f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1" exitCode=0 Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607084 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerDied","Data":"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1"} Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607111 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96gtf" event={"ID":"0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd","Type":"ContainerDied","Data":"f292bf2a284eb647def3d5cb87f512309efa8438e63e51fdcf6e4e9a64cf90f0"} Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607127 4902 scope.go:117] "RemoveContainer" containerID="f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.607471 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-96gtf" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.642710 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"] Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.656751 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-96gtf"] Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.659789 4902 scope.go:117] "RemoveContainer" containerID="7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.688076 4902 scope.go:117] "RemoveContainer" containerID="5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.740581 4902 scope.go:117] "RemoveContainer" containerID="f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1" Jan 21 16:53:16 crc kubenswrapper[4902]: E0121 16:53:16.741026 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1\": container with ID starting with f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1 not found: ID does not exist" containerID="f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.741091 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1"} err="failed to get container status \"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1\": rpc error: code = NotFound desc = could not find container \"f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1\": container with ID starting with f1a8c158fc2bf1d0abdfc6859037a0940590acb0b95ed71ee73ad03d30cf26e1 not found: ID does not exist" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.741121 4902 scope.go:117] "RemoveContainer" containerID="7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0" Jan 21 16:53:16 crc kubenswrapper[4902]: E0121 16:53:16.741633 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0\": container with ID starting with 7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0 not found: ID does not exist" containerID="7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.741664 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0"} err="failed to get container status \"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0\": rpc error: code = NotFound desc = could not find container \"7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0\": container with ID starting with 7268e6dd157fe43cbbe8c45fd47080f6c430f5a549e86befcb425888d045f3b0 not found: ID does not exist" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.741697 4902 scope.go:117] "RemoveContainer" containerID="5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984" Jan 21 16:53:16 crc kubenswrapper[4902]: E0121 16:53:16.742118 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984\": container with ID starting with 5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984 not found: ID does not exist" containerID="5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984" Jan 21 16:53:16 crc kubenswrapper[4902]: I0121 16:53:16.742147 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984"} err="failed to get container status \"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984\": rpc error: code = NotFound desc = could not find container \"5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984\": container with ID starting with 5965a2aece2742e9ac6eafb08915f1944b45e3c3ffab36ecf483c021eab47984 not found: ID does not exist" Jan 21 16:53:18 crc kubenswrapper[4902]: I0121 16:53:18.308214 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" path="/var/lib/kubelet/pods/0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd/volumes" Jan 21 16:53:19 crc kubenswrapper[4902]: I0121 16:53:19.295446 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:53:19 crc kubenswrapper[4902]: E0121 16:53:19.296028 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:53:21 crc kubenswrapper[4902]: I0121 16:53:21.571404 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:21 crc kubenswrapper[4902]: I0121 16:53:21.631670 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:21 crc kubenswrapper[4902]: I0121 16:53:21.816369 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"] Jan 21 16:53:22 crc kubenswrapper[4902]: I0121 16:53:22.663328 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-54qdd" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" containerID="cri-o://70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea" gracePeriod=2 Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.191924 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.310891 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") pod \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.311003 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") pod \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.311072 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") pod \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\" (UID: \"4e0d27aa-f265-4f75-b74b-8ec006ae7151\") " Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.312202 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities" (OuterVolumeSpecName: "utilities") pod "4e0d27aa-f265-4f75-b74b-8ec006ae7151" (UID: "4e0d27aa-f265-4f75-b74b-8ec006ae7151"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.318120 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh" (OuterVolumeSpecName: "kube-api-access-6bpnh") pod "4e0d27aa-f265-4f75-b74b-8ec006ae7151" (UID: "4e0d27aa-f265-4f75-b74b-8ec006ae7151"). InnerVolumeSpecName "kube-api-access-6bpnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.413034 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bpnh\" (UniqueName: \"kubernetes.io/projected/4e0d27aa-f265-4f75-b74b-8ec006ae7151-kube-api-access-6bpnh\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.413072 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.442245 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e0d27aa-f265-4f75-b74b-8ec006ae7151" (UID: "4e0d27aa-f265-4f75-b74b-8ec006ae7151"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.514883 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0d27aa-f265-4f75-b74b-8ec006ae7151-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.672892 4902 generic.go:334] "Generic (PLEG): container finished" podID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerID="70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea" exitCode=0 Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.672934 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerDied","Data":"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea"} Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.672966 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54qdd" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.672979 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54qdd" event={"ID":"4e0d27aa-f265-4f75-b74b-8ec006ae7151","Type":"ContainerDied","Data":"ef9cc39908a6f856e15324133e405fe3affe18c520ff62385e5a7910a73603f5"} Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.673009 4902 scope.go:117] "RemoveContainer" containerID="70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.710289 4902 scope.go:117] "RemoveContainer" containerID="1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.712649 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"] Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.735349 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-54qdd"] Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.751898 4902 scope.go:117] "RemoveContainer" containerID="275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.797694 4902 scope.go:117] "RemoveContainer" containerID="70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea" Jan 21 16:53:23 crc kubenswrapper[4902]: E0121 16:53:23.798317 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea\": container with ID starting with 70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea not found: ID does not exist" containerID="70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.798354 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea"} err="failed to get container status \"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea\": rpc error: code = NotFound desc = could not find container \"70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea\": container with ID starting with 70cab8609e11ccf3f49f001b5387ca51d1b32b12ed8fda56f151db3d5df463ea not found: ID does not exist" Jan 21 16:53:23 crc 
kubenswrapper[4902]: I0121 16:53:23.798382 4902 scope.go:117] "RemoveContainer" containerID="1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10" Jan 21 16:53:23 crc kubenswrapper[4902]: E0121 16:53:23.798679 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10\": container with ID starting with 1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10 not found: ID does not exist" containerID="1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.798709 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10"} err="failed to get container status \"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10\": rpc error: code = NotFound desc = could not find container \"1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10\": container with ID starting with 1947ec42fd69cb76effdc0a8c28c8eabe6bb409dc74b40899ca094bce00b5f10 not found: ID does not exist" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.798728 4902 scope.go:117] "RemoveContainer" containerID="275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84" Jan 21 16:53:23 crc kubenswrapper[4902]: E0121 16:53:23.799574 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84\": container with ID starting with 275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84 not found: ID does not exist" containerID="275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84" Jan 21 16:53:23 crc kubenswrapper[4902]: I0121 16:53:23.799605 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84"} err="failed to get container status \"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84\": rpc error: code = NotFound desc = could not find container \"275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84\": container with ID starting with 275d539d248d506426dd37cf223fc6b72a621e4ab41cebe0898c7d344b6b6b84 not found: ID does not exist" Jan 21 16:53:24 crc kubenswrapper[4902]: I0121 16:53:24.308185 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" path="/var/lib/kubelet/pods/4e0d27aa-f265-4f75-b74b-8ec006ae7151/volumes" Jan 21 16:53:31 crc kubenswrapper[4902]: I0121 16:53:31.297324 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:53:31 crc kubenswrapper[4902]: E0121 16:53:31.299158 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:53:44 crc kubenswrapper[4902]: I0121 16:53:44.296771 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" 
Jan 21 16:53:44 crc kubenswrapper[4902]: E0121 16:53:44.297607 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:53:56 crc kubenswrapper[4902]: I0121 16:53:56.296544 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:53:56 crc kubenswrapper[4902]: E0121 16:53:56.297823 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:54:11 crc kubenswrapper[4902]: I0121 16:54:11.295890 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:54:11 crc kubenswrapper[4902]: E0121 16:54:11.299205 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:54:23 crc kubenswrapper[4902]: I0121 16:54:23.295095 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:54:23 crc kubenswrapper[4902]: E0121 16:54:23.296119 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:54:35 crc kubenswrapper[4902]: I0121 16:54:35.298754 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:54:35 crc kubenswrapper[4902]: E0121 16:54:35.299576 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:54:49 crc kubenswrapper[4902]: I0121 16:54:49.297384 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:54:49 crc kubenswrapper[4902]: E0121 16:54:49.304792 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:55:02 crc kubenswrapper[4902]: I0121 16:55:02.294940 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:55:02 crc kubenswrapper[4902]: E0121 16:55:02.295646 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:55:13 crc kubenswrapper[4902]: I0121 16:55:13.294889 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:55:13 crc kubenswrapper[4902]: E0121 16:55:13.296269 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 16:55:24 crc kubenswrapper[4902]: I0121 16:55:24.294871 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:55:25 crc kubenswrapper[4902]: I0121 16:55:25.003700 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1"} Jan 21 16:57:47 crc kubenswrapper[4902]: I0121 16:57:47.770221 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:57:47 crc kubenswrapper[4902]: I0121 16:57:47.770844 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:58:17 crc kubenswrapper[4902]: I0121 16:58:17.769557 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:58:17 crc kubenswrapper[4902]: I0121 16:58:17.770110 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.770332 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.770847 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.770914 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.771827 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:58:47 crc kubenswrapper[4902]: I0121 16:58:47.771903 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1" gracePeriod=600 Jan 21 16:58:48 crc kubenswrapper[4902]: I0121 16:58:48.829525 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1" exitCode=0 Jan 21 16:58:48 crc kubenswrapper[4902]: I0121 16:58:48.829585 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1"} Jan 21 16:58:48 crc kubenswrapper[4902]: I0121 16:58:48.830027 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd"} Jan 21 16:58:48 crc kubenswrapper[4902]: I0121 16:58:48.830053 4902 scope.go:117] "RemoveContainer" containerID="c5ed6610a61da20b87c1b6980fc52c3c686b0eb88ff6c5086ebaa41e65f98fdc" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.431687 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432615 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="extract-content" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432628 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" 
containerName="extract-content" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432650 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="extract-utilities" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432658 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="extract-utilities" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432681 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="extract-content" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432688 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="extract-content" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432700 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432706 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432713 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="extract-utilities" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432718 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="extract-utilities" Jan 21 16:58:50 crc kubenswrapper[4902]: E0121 16:58:50.432733 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432739 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432929 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0d27aa-f265-4f75-b74b-8ec006ae7151" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.432955 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d7d3b40-1a5c-452d-864a-7e67d8c1e7bd" containerName="registry-server" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.437647 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.446309 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.570404 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.570493 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.570576 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.672246 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.672580 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.672675 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.673289 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.673334 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.695507 4902 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") pod \"redhat-marketplace-rs8bm\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:50 crc kubenswrapper[4902]: I0121 16:58:50.757902 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.296539 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:58:51 crc kubenswrapper[4902]: W0121 16:58:51.299092 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06359608_b433_4b84_8058_97775f4976ff.slice/crio-a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f WatchSource:0}: Error finding container a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f: Status 404 returned error can't find the container with id a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.858352 4902 generic.go:334] "Generic (PLEG): container finished" podID="06359608-b433-4b84-8058-97775f4976ff" containerID="6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686" exitCode=0 Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.858420 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerDied","Data":"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686"} Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.858664 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerStarted","Data":"a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f"} Jan 21 16:58:51 crc kubenswrapper[4902]: I0121 16:58:51.861058 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:58:53 crc kubenswrapper[4902]: I0121 16:58:53.882273 4902 generic.go:334] "Generic (PLEG): container finished" podID="06359608-b433-4b84-8058-97775f4976ff" containerID="938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860" exitCode=0 Jan 21 16:58:53 crc kubenswrapper[4902]: I0121 16:58:53.882388 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerDied","Data":"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860"} Jan 21 16:58:55 crc kubenswrapper[4902]: I0121 16:58:55.905493 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerStarted","Data":"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9"} Jan 21 16:58:55 crc kubenswrapper[4902]: I0121 16:58:55.930599 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rs8bm" podStartSLOduration=3.462393264 podStartE2EDuration="5.930581152s" podCreationTimestamp="2026-01-21 16:58:50 +0000 UTC" firstStartedPulling="2026-01-21 16:58:51.860798079 +0000 UTC m=+8693.937631108" 
lastFinishedPulling="2026-01-21 16:58:54.328985977 +0000 UTC m=+8696.405818996" observedRunningTime="2026-01-21 16:58:55.924015817 +0000 UTC m=+8698.000848856" watchObservedRunningTime="2026-01-21 16:58:55.930581152 +0000 UTC m=+8698.007414181" Jan 21 16:59:00 crc kubenswrapper[4902]: I0121 16:59:00.758486 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:00 crc kubenswrapper[4902]: I0121 16:59:00.758968 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:00 crc kubenswrapper[4902]: I0121 16:59:00.814850 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:01 crc kubenswrapper[4902]: I0121 16:59:01.019503 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:01 crc kubenswrapper[4902]: I0121 16:59:01.084150 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:59:02 crc kubenswrapper[4902]: I0121 16:59:02.970101 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rs8bm" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="registry-server" containerID="cri-o://415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9" gracePeriod=2 Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.452995 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.602712 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") pod \"06359608-b433-4b84-8058-97775f4976ff\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.602898 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") pod \"06359608-b433-4b84-8058-97775f4976ff\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.602994 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") pod \"06359608-b433-4b84-8058-97775f4976ff\" (UID: \"06359608-b433-4b84-8058-97775f4976ff\") " Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.603789 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities" (OuterVolumeSpecName: "utilities") pod "06359608-b433-4b84-8058-97775f4976ff" (UID: "06359608-b433-4b84-8058-97775f4976ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.608193 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s" (OuterVolumeSpecName: "kube-api-access-hvj9s") pod "06359608-b433-4b84-8058-97775f4976ff" (UID: "06359608-b433-4b84-8058-97775f4976ff"). InnerVolumeSpecName "kube-api-access-hvj9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.625242 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06359608-b433-4b84-8058-97775f4976ff" (UID: "06359608-b433-4b84-8058-97775f4976ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.705697 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.705737 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvj9s\" (UniqueName: \"kubernetes.io/projected/06359608-b433-4b84-8058-97775f4976ff-kube-api-access-hvj9s\") on node \"crc\" DevicePath \"\"" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.705749 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06359608-b433-4b84-8058-97775f4976ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.984460 4902 generic.go:334] "Generic (PLEG): container finished" podID="06359608-b433-4b84-8058-97775f4976ff" containerID="415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9" exitCode=0 Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.984507 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerDied","Data":"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9"} Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.985451 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs8bm" event={"ID":"06359608-b433-4b84-8058-97775f4976ff","Type":"ContainerDied","Data":"a5bfc0b8a99730dfe546e6d8b4235e9b0d61cbcfbd4125eacac833383f98740f"} Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.984572 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs8bm" Jan 21 16:59:03 crc kubenswrapper[4902]: I0121 16:59:03.985477 4902 scope.go:117] "RemoveContainer" containerID="415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.013616 4902 scope.go:117] "RemoveContainer" containerID="938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.035918 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.045546 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs8bm"] Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.057816 4902 scope.go:117] "RemoveContainer" containerID="6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.105605 4902 scope.go:117] "RemoveContainer" containerID="415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9" Jan 21 16:59:04 crc kubenswrapper[4902]: E0121 16:59:04.106157 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9\": container with ID starting with 415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9 not found: ID does not exist" containerID="415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106205 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9"} err="failed to get container status \"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9\": rpc error: code = NotFound desc = could not find container \"415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9\": container with ID starting with 415eb33e58e5c0fda33f71aec881db7615dca981f2bbdf94091553445fa946c9 not found: ID does not exist" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106227 4902 scope.go:117] "RemoveContainer" containerID="938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860" Jan 21 16:59:04 crc kubenswrapper[4902]: E0121 16:59:04.106438 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860\": container with ID starting with 938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860 not found: ID does not exist" containerID="938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106470 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860"} err="failed to get container status \"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860\": rpc error: code = NotFound desc = could not find container \"938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860\": container with ID starting with 938d283e329d2ca24ffbe1c576d31c6474a89d21dd9ab2d69812e949032b5860 not found: ID does not exist" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106488 4902 scope.go:117] "RemoveContainer" 
containerID="6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686" Jan 21 16:59:04 crc kubenswrapper[4902]: E0121 16:59:04.106730 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686\": container with ID starting with 6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686 not found: ID does not exist" containerID="6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.106747 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686"} err="failed to get container status \"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686\": rpc error: code = NotFound desc = could not find container \"6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686\": container with ID starting with 6a3ce4f8064bdf34495fa62ef85475e96bbba60e6e02b25e4a3738e1c4a8f686 not found: ID does not exist" Jan 21 16:59:04 crc kubenswrapper[4902]: I0121 16:59:04.307804 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06359608-b433-4b84-8058-97775f4976ff" path="/var/lib/kubelet/pods/06359608-b433-4b84-8058-97775f4976ff/volumes" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.156266 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"] Jan 21 17:00:00 crc kubenswrapper[4902]: E0121 17:00:00.157133 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="extract-content" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.157145 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="extract-content" Jan 21 17:00:00 crc kubenswrapper[4902]: E0121 17:00:00.157167 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="registry-server" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.157173 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="registry-server" Jan 21 17:00:00 crc kubenswrapper[4902]: E0121 17:00:00.157196 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="extract-utilities" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.157202 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="extract-utilities" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.157406 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="06359608-b433-4b84-8058-97775f4976ff" containerName="registry-server" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.158127 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.165393 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.165405 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.175316 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"] Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.321670 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.321835 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.322006 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.424490 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.424631 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.424850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.425718 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") pod 
\"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.436653 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.461105 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") pod \"collect-profiles-29483580-qxkr5\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:00 crc kubenswrapper[4902]: I0121 17:00:00.487288 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:01 crc kubenswrapper[4902]: I0121 17:00:01.031146 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5"] Jan 21 17:00:01 crc kubenswrapper[4902]: I0121 17:00:01.592741 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" event={"ID":"0b4bb0bd-43ae-4455-8695-1123a4597e26","Type":"ContainerStarted","Data":"367859eb580777758fc90831db5c3fe7bd94cfec159c3396efcd6037139700cd"} Jan 21 17:00:01 crc kubenswrapper[4902]: I0121 17:00:01.592783 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" event={"ID":"0b4bb0bd-43ae-4455-8695-1123a4597e26","Type":"ContainerStarted","Data":"da083988cf2107abe293a910cba9bba2569d0f58d7e960099a1081674b442c1e"} Jan 21 17:00:01 crc kubenswrapper[4902]: I0121 17:00:01.614600 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" podStartSLOduration=1.614582266 podStartE2EDuration="1.614582266s" podCreationTimestamp="2026-01-21 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:00:01.605003996 +0000 UTC m=+8763.681837025" watchObservedRunningTime="2026-01-21 17:00:01.614582266 +0000 UTC m=+8763.691415295" Jan 21 17:00:02 crc kubenswrapper[4902]: I0121 17:00:02.601573 4902 generic.go:334] "Generic (PLEG): container finished" podID="0b4bb0bd-43ae-4455-8695-1123a4597e26" containerID="367859eb580777758fc90831db5c3fe7bd94cfec159c3396efcd6037139700cd" exitCode=0 Jan 21 17:00:02 crc kubenswrapper[4902]: I0121 17:00:02.601787 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" event={"ID":"0b4bb0bd-43ae-4455-8695-1123a4597e26","Type":"ContainerDied","Data":"367859eb580777758fc90831db5c3fe7bd94cfec159c3396efcd6037139700cd"} Jan 21 17:00:03 crc kubenswrapper[4902]: I0121 17:00:03.998361 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.122340 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") pod \"0b4bb0bd-43ae-4455-8695-1123a4597e26\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.122451 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") pod \"0b4bb0bd-43ae-4455-8695-1123a4597e26\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.122760 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") pod \"0b4bb0bd-43ae-4455-8695-1123a4597e26\" (UID: \"0b4bb0bd-43ae-4455-8695-1123a4597e26\") " Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.123415 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b4bb0bd-43ae-4455-8695-1123a4597e26" (UID: "0b4bb0bd-43ae-4455-8695-1123a4597e26"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.131589 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0b4bb0bd-43ae-4455-8695-1123a4597e26" (UID: "0b4bb0bd-43ae-4455-8695-1123a4597e26"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.132754 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l" (OuterVolumeSpecName: "kube-api-access-wqj9l") pod "0b4bb0bd-43ae-4455-8695-1123a4597e26" (UID: "0b4bb0bd-43ae-4455-8695-1123a4597e26"). InnerVolumeSpecName "kube-api-access-wqj9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.224774 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqj9l\" (UniqueName: \"kubernetes.io/projected/0b4bb0bd-43ae-4455-8695-1123a4597e26-kube-api-access-wqj9l\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.224814 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b4bb0bd-43ae-4455-8695-1123a4597e26-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.224825 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0b4bb0bd-43ae-4455-8695-1123a4597e26-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.626224 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" event={"ID":"0b4bb0bd-43ae-4455-8695-1123a4597e26","Type":"ContainerDied","Data":"da083988cf2107abe293a910cba9bba2569d0f58d7e960099a1081674b442c1e"} Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.626269 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da083988cf2107abe293a910cba9bba2569d0f58d7e960099a1081674b442c1e" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.626335 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-qxkr5" Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.710939 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"] Jan 21 17:00:04 crc kubenswrapper[4902]: I0121 17:00:04.721567 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-pvcf2"] Jan 21 17:00:06 crc kubenswrapper[4902]: I0121 17:00:06.318732 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3234509-8b7b-4b77-9a80-f496d21a727e" path="/var/lib/kubelet/pods/c3234509-8b7b-4b77-9a80-f496d21a727e/volumes" Jan 21 17:00:36 crc kubenswrapper[4902]: I0121 17:00:36.841455 4902 scope.go:117] "RemoveContainer" containerID="4e8300ed14fa669d6234d502917b52e699b6641dda6ef60268cdbc2afafd8313" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.166590 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483581-p4mmj"] Jan 21 17:01:00 crc kubenswrapper[4902]: E0121 17:01:00.168640 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4bb0bd-43ae-4455-8695-1123a4597e26" containerName="collect-profiles" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.168895 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4bb0bd-43ae-4455-8695-1123a4597e26" containerName="collect-profiles" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.169544 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4bb0bd-43ae-4455-8695-1123a4597e26" containerName="collect-profiles" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.171436 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.182350 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-p4mmj"] Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.223288 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.223386 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.223436 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.223694 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.325636 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.325798 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.325850 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.325997 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.333663 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.334342 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.343120 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.352327 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") pod \"keystone-cron-29483581-p4mmj\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:00 crc kubenswrapper[4902]: I0121 17:01:00.514161 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:01 crc kubenswrapper[4902]: I0121 17:01:01.009167 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-p4mmj"] Jan 21 17:01:01 crc kubenswrapper[4902]: W0121 17:01:01.020938 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d2ca2cf_e9dd_4a00_b422_a84ffd14648c.slice/crio-a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab WatchSource:0}: Error finding container a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab: Status 404 returned error can't find the container with id a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab Jan 21 17:01:01 crc kubenswrapper[4902]: I0121 17:01:01.311522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-p4mmj" event={"ID":"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c","Type":"ContainerStarted","Data":"80e4b0de8f14cb4dd8ffc7536921b01171eab937e2dd0722aac183b116cb2d3e"} Jan 21 17:01:01 crc kubenswrapper[4902]: I0121 17:01:01.311968 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-p4mmj" event={"ID":"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c","Type":"ContainerStarted","Data":"a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab"} Jan 21 17:01:04 crc kubenswrapper[4902]: I0121 17:01:04.361156 4902 generic.go:334] "Generic (PLEG): container finished" podID="2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" containerID="80e4b0de8f14cb4dd8ffc7536921b01171eab937e2dd0722aac183b116cb2d3e" exitCode=0 Jan 21 17:01:04 crc kubenswrapper[4902]: I0121 17:01:04.361229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-p4mmj" event={"ID":"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c","Type":"ContainerDied","Data":"80e4b0de8f14cb4dd8ffc7536921b01171eab937e2dd0722aac183b116cb2d3e"} Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.803644 4902 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.882766 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") pod \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.883795 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") pod \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.883993 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") pod \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.884178 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") pod \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\" (UID: \"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c\") " Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.889206 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" (UID: "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.894711 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v" (OuterVolumeSpecName: "kube-api-access-hb88v") pod "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" (UID: "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c"). InnerVolumeSpecName "kube-api-access-hb88v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.915634 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" (UID: "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.960336 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data" (OuterVolumeSpecName: "config-data") pod "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" (UID: "2d2ca2cf-e9dd-4a00-b422-a84ffd14648c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.988557 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb88v\" (UniqueName: \"kubernetes.io/projected/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-kube-api-access-hb88v\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.988592 4902 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.988602 4902 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:05 crc kubenswrapper[4902]: I0121 17:01:05.988611 4902 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2ca2cf-e9dd-4a00-b422-a84ffd14648c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4902]: I0121 17:01:06.400672 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-p4mmj" Jan 21 17:01:06 crc kubenswrapper[4902]: I0121 17:01:06.400712 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-p4mmj" event={"ID":"2d2ca2cf-e9dd-4a00-b422-a84ffd14648c","Type":"ContainerDied","Data":"a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab"} Jan 21 17:01:06 crc kubenswrapper[4902]: I0121 17:01:06.400753 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a347605711377bdcb3ac91b7e81ecb742b4ca2002afc6ac3d222e5c71a6460ab" Jan 21 17:01:17 crc kubenswrapper[4902]: I0121 17:01:17.769813 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:01:17 crc kubenswrapper[4902]: I0121 17:01:17.770374 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.148456 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wtcd8"] Jan 21 17:01:44 crc kubenswrapper[4902]: E0121 17:01:44.149618 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" containerName="keystone-cron" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.149638 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" containerName="keystone-cron" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.149911 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2ca2cf-e9dd-4a00-b422-a84ffd14648c" containerName="keystone-cron" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.152122 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.164474 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"] Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.176919 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.177011 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.178383 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.280805 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.281331 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.281489 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.282386 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.282435 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.319686 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") pod \"community-operators-wtcd8\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:44 crc kubenswrapper[4902]: I0121 17:01:44.483557 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:45 crc kubenswrapper[4902]: I0121 17:01:45.202443 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"] Jan 21 17:01:45 crc kubenswrapper[4902]: I0121 17:01:45.917501 4902 generic.go:334] "Generic (PLEG): container finished" podID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerID="f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3" exitCode=0 Jan 21 17:01:45 crc kubenswrapper[4902]: I0121 17:01:45.917679 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerDied","Data":"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3"} Jan 21 17:01:45 crc kubenswrapper[4902]: I0121 17:01:45.917764 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerStarted","Data":"9a0d90b266baaa13b235a2e441a52fffd29cb0edf367e73f91a2c5e00d545288"} Jan 21 17:01:47 crc kubenswrapper[4902]: I0121 17:01:47.771478 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:01:47 crc kubenswrapper[4902]: I0121 17:01:47.772488 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:01:47 crc kubenswrapper[4902]: I0121 17:01:47.936428 4902 generic.go:334] "Generic (PLEG): container finished" podID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerID="46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a" exitCode=0 Jan 21 17:01:47 crc kubenswrapper[4902]: I0121 17:01:47.936483 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerDied","Data":"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a"} Jan 21 17:01:48 crc kubenswrapper[4902]: I0121 17:01:48.949107 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerStarted","Data":"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46"} Jan 21 17:01:48 crc kubenswrapper[4902]: I0121 17:01:48.975718 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wtcd8" podStartSLOduration=2.523178127 podStartE2EDuration="4.975702774s" podCreationTimestamp="2026-01-21 17:01:44 +0000 UTC" 
firstStartedPulling="2026-01-21 17:01:45.920151275 +0000 UTC m=+8867.996984304" lastFinishedPulling="2026-01-21 17:01:48.372675922 +0000 UTC m=+8870.449508951" observedRunningTime="2026-01-21 17:01:48.973494781 +0000 UTC m=+8871.050327810" watchObservedRunningTime="2026-01-21 17:01:48.975702774 +0000 UTC m=+8871.052535793" Jan 21 17:01:54 crc kubenswrapper[4902]: I0121 17:01:54.484779 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:54 crc kubenswrapper[4902]: I0121 17:01:54.486218 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:54 crc kubenswrapper[4902]: I0121 17:01:54.556257 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:55 crc kubenswrapper[4902]: I0121 17:01:55.063393 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:55 crc kubenswrapper[4902]: I0121 17:01:55.147094 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"] Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.034878 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wtcd8" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="registry-server" containerID="cri-o://b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46" gracePeriod=2 Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.550625 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.563869 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") pod \"e63a3374-1941-4924-9ddf-e2638ebd9da5\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.564436 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") pod \"e63a3374-1941-4924-9ddf-e2638ebd9da5\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.564653 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") pod \"e63a3374-1941-4924-9ddf-e2638ebd9da5\" (UID: \"e63a3374-1941-4924-9ddf-e2638ebd9da5\") " Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.567078 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities" (OuterVolumeSpecName: "utilities") pod "e63a3374-1941-4924-9ddf-e2638ebd9da5" (UID: "e63a3374-1941-4924-9ddf-e2638ebd9da5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.576593 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7" (OuterVolumeSpecName: "kube-api-access-8q5t7") pod "e63a3374-1941-4924-9ddf-e2638ebd9da5" (UID: "e63a3374-1941-4924-9ddf-e2638ebd9da5"). InnerVolumeSpecName "kube-api-access-8q5t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.667449 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q5t7\" (UniqueName: \"kubernetes.io/projected/e63a3374-1941-4924-9ddf-e2638ebd9da5-kube-api-access-8q5t7\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.667479 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.757516 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e63a3374-1941-4924-9ddf-e2638ebd9da5" (UID: "e63a3374-1941-4924-9ddf-e2638ebd9da5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:01:57 crc kubenswrapper[4902]: I0121 17:01:57.768915 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63a3374-1941-4924-9ddf-e2638ebd9da5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.051752 4902 generic.go:334] "Generic (PLEG): container finished" podID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerID="b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46" exitCode=0 Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.051801 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerDied","Data":"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46"} Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.052114 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtcd8" event={"ID":"e63a3374-1941-4924-9ddf-e2638ebd9da5","Type":"ContainerDied","Data":"9a0d90b266baaa13b235a2e441a52fffd29cb0edf367e73f91a2c5e00d545288"} Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.052140 4902 scope.go:117] "RemoveContainer" containerID="b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.051878 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wtcd8" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.087069 4902 scope.go:117] "RemoveContainer" containerID="46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.102189 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"] Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.120334 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wtcd8"] Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.130622 4902 scope.go:117] "RemoveContainer" containerID="f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.180299 4902 scope.go:117] "RemoveContainer" containerID="b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46" Jan 21 17:01:58 crc kubenswrapper[4902]: E0121 17:01:58.180633 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46\": container with ID starting with b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46 not found: ID does not exist" containerID="b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.180669 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46"} err="failed to get container status \"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46\": rpc error: code = NotFound desc = could not find container \"b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46\": container with ID starting with b01a0b38890cb8eaf0df57921bf7642ba8b30b54039bdbc1dea9d1d6e9b0ff46 not found: ID does not exist" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.180691 4902 scope.go:117] "RemoveContainer" containerID="46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a" Jan 21 17:01:58 crc kubenswrapper[4902]: E0121 17:01:58.181134 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a\": container with ID starting with 46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a not found: ID does not exist" containerID="46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.181197 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a"} err="failed to get container status \"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a\": rpc error: code = NotFound desc = could not find container \"46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a\": container with ID starting with 46074c0df4781f45f76d8de30b4831bcda3c222dcd955c4834c4068554d8889a not found: ID does not exist" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.181236 4902 scope.go:117] "RemoveContainer" containerID="f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3" Jan 21 17:01:58 crc kubenswrapper[4902]: E0121 17:01:58.181515 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3\": container with ID starting with f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3 not found: ID does not exist" containerID="f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.181536 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3"} err="failed to get container status \"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3\": rpc error: code = NotFound desc = could not find container \"f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3\": container with ID starting with f3c478b983036543d91a209a8db5de72a1c06a22268edc0eed2e10db53ff5bf3 not found: ID does not exist" Jan 21 17:01:58 crc kubenswrapper[4902]: I0121 17:01:58.340199 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" path="/var/lib/kubelet/pods/e63a3374-1941-4924-9ddf-e2638ebd9da5/volumes" Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.769727 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.770317 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.770497 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.771519 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:02:17 crc kubenswrapper[4902]: I0121 17:02:17.771579 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" gracePeriod=600 Jan 21 17:02:17 crc kubenswrapper[4902]: E0121 17:02:17.898804 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:02:18 crc kubenswrapper[4902]: I0121 17:02:18.251933 4902 generic.go:334] 
"Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" exitCode=0 Jan 21 17:02:18 crc kubenswrapper[4902]: I0121 17:02:18.251981 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd"} Jan 21 17:02:18 crc kubenswrapper[4902]: I0121 17:02:18.252018 4902 scope.go:117] "RemoveContainer" containerID="9f776d5840d31ab607fa3e29bc18a92f4b5a166c8a644779f2d27b3187fe84b1" Jan 21 17:02:18 crc kubenswrapper[4902]: I0121 17:02:18.252866 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:02:18 crc kubenswrapper[4902]: E0121 17:02:18.253174 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:02:32 crc kubenswrapper[4902]: I0121 17:02:32.295945 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:02:32 crc kubenswrapper[4902]: E0121 17:02:32.296562 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:02:47 crc kubenswrapper[4902]: I0121 17:02:47.295631 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:02:47 crc kubenswrapper[4902]: E0121 17:02:47.296230 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:02:58 crc kubenswrapper[4902]: I0121 17:02:58.302485 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:02:58 crc kubenswrapper[4902]: E0121 17:02:58.303335 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:03:12 crc kubenswrapper[4902]: I0121 17:03:12.295372 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" 
Jan 21 17:03:12 crc kubenswrapper[4902]: E0121 17:03:12.296086 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:03:26 crc kubenswrapper[4902]: I0121 17:03:26.296088 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:03:26 crc kubenswrapper[4902]: E0121 17:03:26.296874 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.275774 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:35 crc kubenswrapper[4902]: E0121 17:03:35.277183 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="registry-server" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.277199 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="registry-server" Jan 21 17:03:35 crc kubenswrapper[4902]: E0121 17:03:35.277215 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="extract-content" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.277221 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="extract-content" Jan 21 17:03:35 crc kubenswrapper[4902]: E0121 17:03:35.277247 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="extract-utilities" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.277253 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="extract-utilities" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.277433 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63a3374-1941-4924-9ddf-e2638ebd9da5" containerName="registry-server" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.280456 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.300716 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.350569 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.350877 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.350967 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.452333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.452418 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.452518 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.453354 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.453765 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.496417 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") pod \"certified-operators-4bfxj\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:35 crc kubenswrapper[4902]: I0121 17:03:35.624711 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:36 crc kubenswrapper[4902]: I0121 17:03:36.327822 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:37 crc kubenswrapper[4902]: I0121 17:03:37.120375 4902 generic.go:334] "Generic (PLEG): container finished" podID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerID="9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf" exitCode=0 Jan 21 17:03:37 crc kubenswrapper[4902]: I0121 17:03:37.121092 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerDied","Data":"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf"} Jan 21 17:03:37 crc kubenswrapper[4902]: I0121 17:03:37.121147 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerStarted","Data":"d0832327e40c84a599a387827caf79267c3e8b7bc23b9d54b39042684a4664a0"} Jan 21 17:03:38 crc kubenswrapper[4902]: I0121 17:03:38.131398 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerStarted","Data":"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5"} Jan 21 17:03:39 crc kubenswrapper[4902]: I0121 17:03:39.143863 4902 generic.go:334] "Generic (PLEG): container finished" podID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerID="a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5" exitCode=0 Jan 21 17:03:39 crc kubenswrapper[4902]: I0121 17:03:39.144210 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerDied","Data":"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5"} Jan 21 17:03:39 crc kubenswrapper[4902]: I0121 17:03:39.294921 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:03:39 crc kubenswrapper[4902]: E0121 17:03:39.296066 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:03:41 crc kubenswrapper[4902]: I0121 17:03:41.172215 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerStarted","Data":"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280"} Jan 21 17:03:41 crc kubenswrapper[4902]: I0121 17:03:41.205536 4902 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4bfxj" podStartSLOduration=3.7859893810000003 podStartE2EDuration="6.205512808s" podCreationTimestamp="2026-01-21 17:03:35 +0000 UTC" firstStartedPulling="2026-01-21 17:03:37.123066517 +0000 UTC m=+8979.199899556" lastFinishedPulling="2026-01-21 17:03:39.542589944 +0000 UTC m=+8981.619422983" observedRunningTime="2026-01-21 17:03:41.197985796 +0000 UTC m=+8983.274818825" watchObservedRunningTime="2026-01-21 17:03:41.205512808 +0000 UTC m=+8983.282345837" Jan 21 17:03:45 crc kubenswrapper[4902]: I0121 17:03:45.625395 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:45 crc kubenswrapper[4902]: I0121 17:03:45.625939 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:45 crc kubenswrapper[4902]: I0121 17:03:45.679582 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:46 crc kubenswrapper[4902]: I0121 17:03:46.276541 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:46 crc kubenswrapper[4902]: I0121 17:03:46.348233 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.235121 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4bfxj" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="registry-server" containerID="cri-o://dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" gracePeriod=2 Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.731984 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.880097 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") pod \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.880211 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") pod \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.880432 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") pod \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\" (UID: \"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b\") " Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.881832 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities" (OuterVolumeSpecName: "utilities") pod "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" (UID: "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.892129 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt" (OuterVolumeSpecName: "kube-api-access-hkznt") pod "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" (UID: "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b"). InnerVolumeSpecName "kube-api-access-hkznt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.926801 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" (UID: "9606c3e6-5b1d-4c14-a719-7f3ede91dc0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.983373 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.983592 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:48 crc kubenswrapper[4902]: I0121 17:03:48.983606 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkznt\" (UniqueName: \"kubernetes.io/projected/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b-kube-api-access-hkznt\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.247404 4902 generic.go:334] "Generic (PLEG): container finished" podID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerID="dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" exitCode=0 Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.247576 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerDied","Data":"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280"} Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.248654 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bfxj" event={"ID":"9606c3e6-5b1d-4c14-a719-7f3ede91dc0b","Type":"ContainerDied","Data":"d0832327e40c84a599a387827caf79267c3e8b7bc23b9d54b39042684a4664a0"} Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.248733 4902 scope.go:117] "RemoveContainer" containerID="dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.247647 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bfxj" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.270660 4902 scope.go:117] "RemoveContainer" containerID="a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.298251 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.309036 4902 scope.go:117] "RemoveContainer" containerID="9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.312714 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4bfxj"] Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.383357 4902 scope.go:117] "RemoveContainer" containerID="dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" Jan 21 17:03:49 crc kubenswrapper[4902]: E0121 17:03:49.383888 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280\": container with ID starting with dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280 not found: ID does not exist" containerID="dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.383932 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280"} err="failed to get container status \"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280\": rpc error: code = NotFound desc = could not find container \"dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280\": container with ID starting with dee48d7b59f03b4b58476e4a8d0c80902aa9440959644b28e988014ae75c5280 not found: ID does not exist" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.383960 4902 scope.go:117] "RemoveContainer" containerID="a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5" Jan 21 17:03:49 crc kubenswrapper[4902]: E0121 17:03:49.384319 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5\": container with ID starting with a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5 not found: ID does not exist" containerID="a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.384350 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5"} err="failed to get container status \"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5\": rpc error: code = NotFound desc = could not find container \"a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5\": container with ID starting with a08ed91e30c828a66649c4a4ddc774bcee735e2b3e3d97fe2db929050cb90aa5 not found: ID does not exist" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.384367 4902 scope.go:117] "RemoveContainer" containerID="9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf" Jan 21 17:03:49 crc kubenswrapper[4902]: E0121 17:03:49.384707 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf\": container with ID starting with 9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf not found: ID does not exist" containerID="9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf" Jan 21 17:03:49 crc kubenswrapper[4902]: I0121 17:03:49.384747 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf"} err="failed to get container status \"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf\": rpc error: code = NotFound desc = could not find container \"9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf\": container with ID starting with 9ebe50bc15b195b3f2ed09d6a3bc4685416b1efc661329b3c6a3ee49a683fedf not found: ID does not exist" Jan 21 17:03:50 crc kubenswrapper[4902]: I0121 17:03:50.305130 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" path="/var/lib/kubelet/pods/9606c3e6-5b1d-4c14-a719-7f3ede91dc0b/volumes" Jan 21 17:03:54 crc kubenswrapper[4902]: I0121 17:03:54.295176 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:03:54 crc kubenswrapper[4902]: E0121 17:03:54.295979 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:06 crc kubenswrapper[4902]: I0121 17:04:06.296072 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:06 crc kubenswrapper[4902]: E0121 17:04:06.297053 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:21 crc kubenswrapper[4902]: I0121 17:04:21.295133 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:21 crc kubenswrapper[4902]: E0121 17:04:21.296403 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:34 crc kubenswrapper[4902]: I0121 17:04:34.295704 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:34 crc kubenswrapper[4902]: E0121 17:04:34.296496 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:47 crc kubenswrapper[4902]: I0121 17:04:47.295289 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:47 crc kubenswrapper[4902]: E0121 17:04:47.297360 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:04:59 crc kubenswrapper[4902]: I0121 17:04:59.295594 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:04:59 crc kubenswrapper[4902]: E0121 17:04:59.296331 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.411996 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:10 crc kubenswrapper[4902]: E0121 17:05:10.413509 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="extract-utilities" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.413531 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="extract-utilities" Jan 21 17:05:10 crc kubenswrapper[4902]: E0121 17:05:10.413589 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="extract-content" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.413601 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="extract-content" Jan 21 17:05:10 crc kubenswrapper[4902]: E0121 17:05:10.413624 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="registry-server" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.413633 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="registry-server" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.414079 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9606c3e6-5b1d-4c14-a719-7f3ede91dc0b" containerName="registry-server" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.416161 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.416271 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.476185 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.478031 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.478839 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.580883 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.581312 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.581543 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.581729 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.582083 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") pod \"redhat-operators-fr7wb\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.603788 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") pod \"redhat-operators-fr7wb\" (UID: 
\"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:10 crc kubenswrapper[4902]: I0121 17:05:10.751997 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:11 crc kubenswrapper[4902]: I0121 17:05:11.246055 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:12 crc kubenswrapper[4902]: I0121 17:05:12.178542 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerID="bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2" exitCode=0 Jan 21 17:05:12 crc kubenswrapper[4902]: I0121 17:05:12.178627 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerDied","Data":"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2"} Jan 21 17:05:12 crc kubenswrapper[4902]: I0121 17:05:12.178880 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerStarted","Data":"c1f6bcbe0f7b0209e28301ed188965109ae19d5e6c3be4afa9e0fd5913151b90"} Jan 21 17:05:12 crc kubenswrapper[4902]: I0121 17:05:12.183272 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:05:14 crc kubenswrapper[4902]: I0121 17:05:14.200294 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerStarted","Data":"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c"} Jan 21 17:05:14 crc kubenswrapper[4902]: I0121 17:05:14.296395 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:05:14 crc kubenswrapper[4902]: E0121 17:05:14.296765 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:05:15 crc kubenswrapper[4902]: I0121 17:05:15.213267 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerID="1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c" exitCode=0 Jan 21 17:05:15 crc kubenswrapper[4902]: I0121 17:05:15.213454 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerDied","Data":"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c"} Jan 21 17:05:20 crc kubenswrapper[4902]: I0121 17:05:20.273535 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerStarted","Data":"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d"} Jan 21 17:05:20 crc kubenswrapper[4902]: I0121 17:05:20.288822 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-fr7wb" podStartSLOduration=2.484739224 podStartE2EDuration="10.288802212s" podCreationTimestamp="2026-01-21 17:05:10 +0000 UTC" firstStartedPulling="2026-01-21 17:05:12.182925134 +0000 UTC m=+9074.259758173" lastFinishedPulling="2026-01-21 17:05:19.986988092 +0000 UTC m=+9082.063821161" observedRunningTime="2026-01-21 17:05:20.288313378 +0000 UTC m=+9082.365146447" watchObservedRunningTime="2026-01-21 17:05:20.288802212 +0000 UTC m=+9082.365635251" Jan 21 17:05:20 crc kubenswrapper[4902]: I0121 17:05:20.752810 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:20 crc kubenswrapper[4902]: I0121 17:05:20.753519 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:21 crc kubenswrapper[4902]: I0121 17:05:21.797447 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fr7wb" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" probeResult="failure" output=< Jan 21 17:05:21 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 17:05:21 crc kubenswrapper[4902]: > Jan 21 17:05:26 crc kubenswrapper[4902]: I0121 17:05:26.295730 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:05:26 crc kubenswrapper[4902]: E0121 17:05:26.296708 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:05:30 crc kubenswrapper[4902]: I0121 17:05:30.952281 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:31 crc kubenswrapper[4902]: I0121 17:05:31.007590 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:32 crc kubenswrapper[4902]: I0121 17:05:32.650321 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:32 crc kubenswrapper[4902]: I0121 17:05:32.650788 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fr7wb" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" containerID="cri-o://63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" gracePeriod=2 Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.152965 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.258746 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") pod \"ff52a854-5102-46f9-9d63-f3c3db18aab6\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.258805 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") pod \"ff52a854-5102-46f9-9d63-f3c3db18aab6\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.258922 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") pod \"ff52a854-5102-46f9-9d63-f3c3db18aab6\" (UID: \"ff52a854-5102-46f9-9d63-f3c3db18aab6\") " Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.259778 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities" (OuterVolumeSpecName: "utilities") pod "ff52a854-5102-46f9-9d63-f3c3db18aab6" (UID: "ff52a854-5102-46f9-9d63-f3c3db18aab6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.260380 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.265156 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw" (OuterVolumeSpecName: "kube-api-access-kr4fw") pod "ff52a854-5102-46f9-9d63-f3c3db18aab6" (UID: "ff52a854-5102-46f9-9d63-f3c3db18aab6"). InnerVolumeSpecName "kube-api-access-kr4fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.362083 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr4fw\" (UniqueName: \"kubernetes.io/projected/ff52a854-5102-46f9-9d63-f3c3db18aab6-kube-api-access-kr4fw\") on node \"crc\" DevicePath \"\"" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.382781 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff52a854-5102-46f9-9d63-f3c3db18aab6" (UID: "ff52a854-5102-46f9-9d63-f3c3db18aab6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410785 4902 generic.go:334] "Generic (PLEG): container finished" podID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerID="63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" exitCode=0 Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410833 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerDied","Data":"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d"} Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410861 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr7wb" event={"ID":"ff52a854-5102-46f9-9d63-f3c3db18aab6","Type":"ContainerDied","Data":"c1f6bcbe0f7b0209e28301ed188965109ae19d5e6c3be4afa9e0fd5913151b90"} Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410864 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr7wb" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.410880 4902 scope.go:117] "RemoveContainer" containerID="63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.455650 4902 scope.go:117] "RemoveContainer" containerID="1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.465660 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff52a854-5102-46f9-9d63-f3c3db18aab6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.482617 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.496442 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fr7wb"] Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.544969 4902 scope.go:117] "RemoveContainer" containerID="bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.563707 4902 scope.go:117] "RemoveContainer" containerID="63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" Jan 21 17:05:33 crc kubenswrapper[4902]: E0121 17:05:33.564232 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d\": container with ID starting with 63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d not found: ID does not exist" containerID="63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.564391 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d"} err="failed to get container status \"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d\": rpc error: code = NotFound desc = could not find container \"63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d\": container with ID starting with 63a5be4023fe569ccc67a69c2e45ca8133ff95c6e88552ee597c6cd0c83b196d not found: ID does not exist" Jan 21 17:05:33 crc 
kubenswrapper[4902]: I0121 17:05:33.564487 4902 scope.go:117] "RemoveContainer" containerID="1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c" Jan 21 17:05:33 crc kubenswrapper[4902]: E0121 17:05:33.564995 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c\": container with ID starting with 1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c not found: ID does not exist" containerID="1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.565028 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c"} err="failed to get container status \"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c\": rpc error: code = NotFound desc = could not find container \"1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c\": container with ID starting with 1f3efc656ab4656ffa053c25b3d893491d881c76787d6d364b4db9e66b04a29c not found: ID does not exist" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.565076 4902 scope.go:117] "RemoveContainer" containerID="bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2" Jan 21 17:05:33 crc kubenswrapper[4902]: E0121 17:05:33.565333 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2\": container with ID starting with bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2 not found: ID does not exist" containerID="bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2" Jan 21 17:05:33 crc kubenswrapper[4902]: I0121 17:05:33.565364 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2"} err="failed to get container status \"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2\": rpc error: code = NotFound desc = could not find container \"bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2\": container with ID starting with bc28cde46ec2d229ab8038d1fec4ccf2c8da52c2f57eef6129c339d64d23e9d2 not found: ID does not exist" Jan 21 17:05:34 crc kubenswrapper[4902]: I0121 17:05:34.306859 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" path="/var/lib/kubelet/pods/ff52a854-5102-46f9-9d63-f3c3db18aab6/volumes" Jan 21 17:05:38 crc kubenswrapper[4902]: I0121 17:05:38.309930 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:05:38 crc kubenswrapper[4902]: E0121 17:05:38.310692 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:05:52 crc kubenswrapper[4902]: I0121 17:05:52.295808 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" 
Jan 21 17:05:52 crc kubenswrapper[4902]: E0121 17:05:52.296500 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:03 crc kubenswrapper[4902]: I0121 17:06:03.295557 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:03 crc kubenswrapper[4902]: E0121 17:06:03.296577 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:16 crc kubenswrapper[4902]: I0121 17:06:16.294781 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:16 crc kubenswrapper[4902]: E0121 17:06:16.296566 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:27 crc kubenswrapper[4902]: I0121 17:06:27.297278 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:27 crc kubenswrapper[4902]: E0121 17:06:27.299576 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:41 crc kubenswrapper[4902]: I0121 17:06:41.295373 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:41 crc kubenswrapper[4902]: E0121 17:06:41.296481 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:06:55 crc kubenswrapper[4902]: I0121 17:06:55.294798 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:06:55 crc kubenswrapper[4902]: E0121 17:06:55.295782 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:07:06 crc kubenswrapper[4902]: I0121 17:07:06.295380 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:07:06 crc kubenswrapper[4902]: E0121 17:07:06.296988 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:07:21 crc kubenswrapper[4902]: I0121 17:07:21.294967 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:07:22 crc kubenswrapper[4902]: I0121 17:07:22.628791 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b"} Jan 21 17:09:47 crc kubenswrapper[4902]: I0121 17:09:47.769896 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:09:47 crc kubenswrapper[4902]: I0121 17:09:47.770581 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.856601 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:13 crc kubenswrapper[4902]: E0121 17:10:13.857831 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.857844 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" Jan 21 17:10:13 crc kubenswrapper[4902]: E0121 17:10:13.857863 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="extract-utilities" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.857869 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="extract-utilities" Jan 21 17:10:13 crc kubenswrapper[4902]: E0121 17:10:13.857875 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="extract-content" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.857881 4902 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="extract-content" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.858112 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff52a854-5102-46f9-9d63-f3c3db18aab6" containerName="registry-server" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.860116 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.873990 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.927304 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.927362 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:13 crc kubenswrapper[4902]: I0121 17:10:13.927701 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.029579 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.030024 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.030066 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.030540 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.031639 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.066915 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") pod \"redhat-marketplace-xrptr\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.247486 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:14 crc kubenswrapper[4902]: I0121 17:10:14.868423 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:15 crc kubenswrapper[4902]: I0121 17:10:15.698394 4902 generic.go:334] "Generic (PLEG): container finished" podID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerID="5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c" exitCode=0 Jan 21 17:10:15 crc kubenswrapper[4902]: I0121 17:10:15.698579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerDied","Data":"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c"} Jan 21 17:10:15 crc kubenswrapper[4902]: I0121 17:10:15.698890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerStarted","Data":"5fd788724e971f3bd232e168dba85b05b4b9912e82e5a1ffb45644da3784ff0f"} Jan 21 17:10:15 crc kubenswrapper[4902]: I0121 17:10:15.700533 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:10:17 crc kubenswrapper[4902]: I0121 17:10:17.720763 4902 generic.go:334] "Generic (PLEG): container finished" podID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerID="2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3" exitCode=0 Jan 21 17:10:17 crc kubenswrapper[4902]: I0121 17:10:17.721164 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerDied","Data":"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3"} Jan 21 17:10:17 crc kubenswrapper[4902]: I0121 17:10:17.769390 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:10:17 crc kubenswrapper[4902]: I0121 17:10:17.769647 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:10:18 crc kubenswrapper[4902]: I0121 17:10:18.731500 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerStarted","Data":"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb"} Jan 21 17:10:18 crc kubenswrapper[4902]: I0121 17:10:18.750949 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xrptr" podStartSLOduration=3.330580668 podStartE2EDuration="5.750932857s" podCreationTimestamp="2026-01-21 17:10:13 +0000 UTC" firstStartedPulling="2026-01-21 17:10:15.700350694 +0000 UTC m=+9377.777183723" lastFinishedPulling="2026-01-21 17:10:18.120702883 +0000 UTC m=+9380.197535912" observedRunningTime="2026-01-21 17:10:18.74997631 +0000 UTC m=+9380.826809339" watchObservedRunningTime="2026-01-21 17:10:18.750932857 +0000 UTC m=+9380.827765886" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.248580 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.248987 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.315359 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.880283 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:24 crc kubenswrapper[4902]: I0121 17:10:24.939073 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:26 crc kubenswrapper[4902]: I0121 17:10:26.836662 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xrptr" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="registry-server" containerID="cri-o://c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" gracePeriod=2 Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.608921 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.760970 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") pod \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.761063 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") pod \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.761266 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") pod \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\" (UID: \"46dc05bc-4996-4bc3-8dc1-dd22a85dca93\") " Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.762847 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities" (OuterVolumeSpecName: "utilities") pod "46dc05bc-4996-4bc3-8dc1-dd22a85dca93" (UID: "46dc05bc-4996-4bc3-8dc1-dd22a85dca93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.769251 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr" (OuterVolumeSpecName: "kube-api-access-6wpmr") pod "46dc05bc-4996-4bc3-8dc1-dd22a85dca93" (UID: "46dc05bc-4996-4bc3-8dc1-dd22a85dca93"). InnerVolumeSpecName "kube-api-access-6wpmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.793497 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46dc05bc-4996-4bc3-8dc1-dd22a85dca93" (UID: "46dc05bc-4996-4bc3-8dc1-dd22a85dca93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.849857 4902 generic.go:334] "Generic (PLEG): container finished" podID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerID="c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" exitCode=0 Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.849924 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerDied","Data":"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb"} Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.849974 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrptr" event={"ID":"46dc05bc-4996-4bc3-8dc1-dd22a85dca93","Type":"ContainerDied","Data":"5fd788724e971f3bd232e168dba85b05b4b9912e82e5a1ffb45644da3784ff0f"} Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.849996 4902 scope.go:117] "RemoveContainer" containerID="c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.850258 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrptr" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.864007 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.864276 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wpmr\" (UniqueName: \"kubernetes.io/projected/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-kube-api-access-6wpmr\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.864420 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dc05bc-4996-4bc3-8dc1-dd22a85dca93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.877140 4902 scope.go:117] "RemoveContainer" containerID="2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.902993 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.910479 4902 scope.go:117] "RemoveContainer" containerID="5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.914725 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrptr"] Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.978414 4902 scope.go:117] "RemoveContainer" containerID="c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" Jan 21 17:10:27 crc kubenswrapper[4902]: E0121 17:10:27.983175 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb\": container with ID starting with c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb not found: ID does not exist" containerID="c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.983224 4902 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb"} err="failed to get container status \"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb\": rpc error: code = NotFound desc = could not find container \"c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb\": container with ID starting with c73c767d96079ba5614348689ea42417796c607c28fd6aafc250fbbb69b08abb not found: ID does not exist" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.983256 4902 scope.go:117] "RemoveContainer" containerID="2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3" Jan 21 17:10:27 crc kubenswrapper[4902]: E0121 17:10:27.984120 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3\": container with ID starting with 2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3 not found: ID does not exist" containerID="2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.984139 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3"} err="failed to get container status \"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3\": rpc error: code = NotFound desc = could not find container \"2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3\": container with ID starting with 2b284c349a5f6c29e6c1c37d364dd66df7ab429bc257dabb8dfa27f40ed79bc3 not found: ID does not exist" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.984153 4902 scope.go:117] "RemoveContainer" containerID="5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c" Jan 21 17:10:27 crc kubenswrapper[4902]: E0121 17:10:27.985508 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c\": container with ID starting with 5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c not found: ID does not exist" containerID="5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c" Jan 21 17:10:27 crc kubenswrapper[4902]: I0121 17:10:27.985530 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c"} err="failed to get container status \"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c\": rpc error: code = NotFound desc = could not find container \"5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c\": container with ID starting with 5f53ab1a513e4038c92eb57d149299ec1c118002ffa90fea235e036be3465e7c not found: ID does not exist" Jan 21 17:10:28 crc kubenswrapper[4902]: I0121 17:10:28.308536 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" path="/var/lib/kubelet/pods/46dc05bc-4996-4bc3-8dc1-dd22a85dca93/volumes" Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.769774 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.771359 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.771497 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.772416 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:10:47 crc kubenswrapper[4902]: I0121 17:10:47.772558 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b" gracePeriod=600 Jan 21 17:10:48 crc kubenswrapper[4902]: I0121 17:10:48.061876 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b" exitCode=0 Jan 21 17:10:48 crc kubenswrapper[4902]: I0121 17:10:48.061927 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b"} Jan 21 17:10:48 crc kubenswrapper[4902]: I0121 17:10:48.062252 4902 scope.go:117] "RemoveContainer" containerID="f37134a852307d2920630d2730d2ce1293ec3a77ac91e883c51efcbd41ae94cd" Jan 21 17:10:49 crc kubenswrapper[4902]: I0121 17:10:49.072789 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"} Jan 21 17:13:17 crc kubenswrapper[4902]: I0121 17:13:17.770395 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:13:17 crc kubenswrapper[4902]: I0121 17:13:17.771204 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:13:47 crc kubenswrapper[4902]: I0121 17:13:47.770004 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:13:47 crc kubenswrapper[4902]: I0121 17:13:47.770521 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.893148 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:01 crc kubenswrapper[4902]: E0121 17:14:01.894424 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="registry-server" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.894448 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="registry-server" Jan 21 17:14:01 crc kubenswrapper[4902]: E0121 17:14:01.894480 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="extract-utilities" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.894491 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="extract-utilities" Jan 21 17:14:01 crc kubenswrapper[4902]: E0121 17:14:01.894544 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="extract-content" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.894557 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="extract-content" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.894883 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="46dc05bc-4996-4bc3-8dc1-dd22a85dca93" containerName="registry-server" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.898094 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:01 crc kubenswrapper[4902]: I0121 17:14:01.904230 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.039366 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.039575 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.039898 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.142395 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.142585 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.142753 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.142896 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.143385 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.167956 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") pod \"certified-operators-594ns\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.227680 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:02 crc kubenswrapper[4902]: I0121 17:14:02.901934 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:03 crc kubenswrapper[4902]: I0121 17:14:03.550574 4902 generic.go:334] "Generic (PLEG): container finished" podID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerID="ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4" exitCode=0 Jan 21 17:14:03 crc kubenswrapper[4902]: I0121 17:14:03.550811 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerDied","Data":"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4"} Jan 21 17:14:03 crc kubenswrapper[4902]: I0121 17:14:03.550832 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerStarted","Data":"ce31d363464120b5424072d2a0c77ed3f84a85f51a0782a1b19cce1c51bb85b0"} Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.099323 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.103890 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.130148 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.191970 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.192247 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.192363 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.294810 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.294904 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.294933 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.295628 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.296166 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.315363 4902 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") pod \"community-operators-c5vrg\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.431278 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:04 crc kubenswrapper[4902]: I0121 17:14:04.595851 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerStarted","Data":"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340"} Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.006173 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:05 crc kubenswrapper[4902]: W0121 17:14:05.015190 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod670e29d4_f2fe_4d3d_be51_61fa2dc71666.slice/crio-97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8 WatchSource:0}: Error finding container 97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8: Status 404 returned error can't find the container with id 97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8 Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.605871 4902 generic.go:334] "Generic (PLEG): container finished" podID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerID="74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84" exitCode=0 Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.606161 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerDied","Data":"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84"} Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.606188 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerStarted","Data":"97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8"} Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.611815 4902 generic.go:334] "Generic (PLEG): container finished" podID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerID="7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340" exitCode=0 Jan 21 17:14:05 crc kubenswrapper[4902]: I0121 17:14:05.611847 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerDied","Data":"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340"} Jan 21 17:14:06 crc kubenswrapper[4902]: I0121 17:14:06.627160 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerStarted","Data":"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347"} Jan 21 17:14:06 crc kubenswrapper[4902]: I0121 17:14:06.638275 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" 
event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerStarted","Data":"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c"} Jan 21 17:14:06 crc kubenswrapper[4902]: I0121 17:14:06.655673 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-594ns" podStartSLOduration=3.140535176 podStartE2EDuration="5.655651207s" podCreationTimestamp="2026-01-21 17:14:01 +0000 UTC" firstStartedPulling="2026-01-21 17:14:03.552373978 +0000 UTC m=+9605.629207007" lastFinishedPulling="2026-01-21 17:14:06.067490009 +0000 UTC m=+9608.144323038" observedRunningTime="2026-01-21 17:14:06.650602365 +0000 UTC m=+9608.727435394" watchObservedRunningTime="2026-01-21 17:14:06.655651207 +0000 UTC m=+9608.732484246" Jan 21 17:14:07 crc kubenswrapper[4902]: I0121 17:14:07.653449 4902 generic.go:334] "Generic (PLEG): container finished" podID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerID="c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c" exitCode=0 Jan 21 17:14:07 crc kubenswrapper[4902]: I0121 17:14:07.653890 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerDied","Data":"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c"} Jan 21 17:14:08 crc kubenswrapper[4902]: I0121 17:14:08.675092 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerStarted","Data":"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6"} Jan 21 17:14:08 crc kubenswrapper[4902]: I0121 17:14:08.694726 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c5vrg" podStartSLOduration=2.253752409 podStartE2EDuration="4.694706589s" podCreationTimestamp="2026-01-21 17:14:04 +0000 UTC" firstStartedPulling="2026-01-21 17:14:05.61054326 +0000 UTC m=+9607.687376289" lastFinishedPulling="2026-01-21 17:14:08.05149744 +0000 UTC m=+9610.128330469" observedRunningTime="2026-01-21 17:14:08.69259867 +0000 UTC m=+9610.769431699" watchObservedRunningTime="2026-01-21 17:14:08.694706589 +0000 UTC m=+9610.771539628" Jan 21 17:14:12 crc kubenswrapper[4902]: I0121 17:14:12.228176 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:12 crc kubenswrapper[4902]: I0121 17:14:12.228765 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:12 crc kubenswrapper[4902]: I0121 17:14:12.305996 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:12 crc kubenswrapper[4902]: I0121 17:14:12.770715 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:13 crc kubenswrapper[4902]: I0121 17:14:13.679944 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.431464 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.432599 4902 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.498377 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.735813 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-594ns" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="registry-server" containerID="cri-o://2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" gracePeriod=2 Jan 21 17:14:14 crc kubenswrapper[4902]: I0121 17:14:14.791616 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.245063 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.372636 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") pod \"e00bd4cb-eeca-472b-a935-c33859f82a60\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.372870 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") pod \"e00bd4cb-eeca-472b-a935-c33859f82a60\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.372910 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") pod \"e00bd4cb-eeca-472b-a935-c33859f82a60\" (UID: \"e00bd4cb-eeca-472b-a935-c33859f82a60\") " Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.373772 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities" (OuterVolumeSpecName: "utilities") pod "e00bd4cb-eeca-472b-a935-c33859f82a60" (UID: "e00bd4cb-eeca-472b-a935-c33859f82a60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.377978 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25" (OuterVolumeSpecName: "kube-api-access-fwq25") pod "e00bd4cb-eeca-472b-a935-c33859f82a60" (UID: "e00bd4cb-eeca-472b-a935-c33859f82a60"). InnerVolumeSpecName "kube-api-access-fwq25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.424784 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e00bd4cb-eeca-472b-a935-c33859f82a60" (UID: "e00bd4cb-eeca-472b-a935-c33859f82a60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.476206 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwq25\" (UniqueName: \"kubernetes.io/projected/e00bd4cb-eeca-472b-a935-c33859f82a60-kube-api-access-fwq25\") on node \"crc\" DevicePath \"\"" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.476243 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.476294 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00bd4cb-eeca-472b-a935-c33859f82a60-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747018 4902 generic.go:334] "Generic (PLEG): container finished" podID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerID="2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" exitCode=0 Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747143 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-594ns" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747138 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerDied","Data":"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347"} Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747271 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-594ns" event={"ID":"e00bd4cb-eeca-472b-a935-c33859f82a60","Type":"ContainerDied","Data":"ce31d363464120b5424072d2a0c77ed3f84a85f51a0782a1b19cce1c51bb85b0"} Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.747317 4902 scope.go:117] "RemoveContainer" containerID="2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.767723 4902 scope.go:117] "RemoveContainer" containerID="7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.785072 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.797206 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-594ns"] Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.813451 4902 scope.go:117] "RemoveContainer" containerID="ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.867172 4902 scope.go:117] "RemoveContainer" containerID="2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" Jan 21 17:14:15 crc kubenswrapper[4902]: E0121 17:14:15.867625 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347\": container with ID starting with 2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347 not found: ID does not exist" containerID="2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.867679 
4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347"} err="failed to get container status \"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347\": rpc error: code = NotFound desc = could not find container \"2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347\": container with ID starting with 2e94cbd09f43192a5ef5aae0b71ef946c1c68d5b80f2ed4fe9e8b6ff35b90347 not found: ID does not exist" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.867716 4902 scope.go:117] "RemoveContainer" containerID="7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340" Jan 21 17:14:15 crc kubenswrapper[4902]: E0121 17:14:15.868024 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340\": container with ID starting with 7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340 not found: ID does not exist" containerID="7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.868100 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340"} err="failed to get container status \"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340\": rpc error: code = NotFound desc = could not find container \"7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340\": container with ID starting with 7955b44168e0fd9457897d2aba6c4bc74d0803c2ce4a3e4315838b80494ee340 not found: ID does not exist" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.868127 4902 scope.go:117] "RemoveContainer" containerID="ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4" Jan 21 17:14:15 crc kubenswrapper[4902]: E0121 17:14:15.868343 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4\": container with ID starting with ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4 not found: ID does not exist" containerID="ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4" Jan 21 17:14:15 crc kubenswrapper[4902]: I0121 17:14:15.868370 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4"} err="failed to get container status \"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4\": rpc error: code = NotFound desc = could not find container \"ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4\": container with ID starting with ff1a11196f0be5d82d9d764421c88d666cb7dec161fe9c65dbcc82e7d1635ea4 not found: ID does not exist" Jan 21 17:14:16 crc kubenswrapper[4902]: I0121 17:14:16.319093 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" path="/var/lib/kubelet/pods/e00bd4cb-eeca-472b-a935-c33859f82a60/volumes" Jan 21 17:14:16 crc kubenswrapper[4902]: I0121 17:14:16.682978 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.778848 4902 patch_prober.go:28] interesting 
pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.779222 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.779278 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.780918 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.781011 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" gracePeriod=600 Jan 21 17:14:17 crc kubenswrapper[4902]: I0121 17:14:17.795539 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c5vrg" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="registry-server" containerID="cri-o://61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6" gracePeriod=2 Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.422612 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.774487 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.818969 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" exitCode=0 Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.819030 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1"} Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.819077 4902 scope.go:117] "RemoveContainer" containerID="df1e3c29d75db17b6274424fd93ca7c5fe90dd1bfa5747bd7c348f540e868b0b" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.819842 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.820228 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.826226 4902 generic.go:334] "Generic (PLEG): container finished" podID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerID="61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6" exitCode=0 Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.826254 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerDied","Data":"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6"} Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.826273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5vrg" event={"ID":"670e29d4-f2fe-4d3d-be51-61fa2dc71666","Type":"ContainerDied","Data":"97709cf162310f5fb973c5cad674496bec1c683ab5d0fb7ba1ec778e7b52dcd8"} Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.826319 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5vrg" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.850027 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") pod \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.850267 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") pod \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.850627 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") pod \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\" (UID: \"670e29d4-f2fe-4d3d-be51-61fa2dc71666\") " Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.851722 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities" (OuterVolumeSpecName: "utilities") pod "670e29d4-f2fe-4d3d-be51-61fa2dc71666" (UID: "670e29d4-f2fe-4d3d-be51-61fa2dc71666"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.863591 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl" (OuterVolumeSpecName: "kube-api-access-fdbsl") pod "670e29d4-f2fe-4d3d-be51-61fa2dc71666" (UID: "670e29d4-f2fe-4d3d-be51-61fa2dc71666"). InnerVolumeSpecName "kube-api-access-fdbsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.877798 4902 scope.go:117] "RemoveContainer" containerID="61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.922107 4902 scope.go:117] "RemoveContainer" containerID="c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.941514 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "670e29d4-f2fe-4d3d-be51-61fa2dc71666" (UID: "670e29d4-f2fe-4d3d-be51-61fa2dc71666"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.944230 4902 scope.go:117] "RemoveContainer" containerID="74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.953701 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.953725 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/670e29d4-f2fe-4d3d-be51-61fa2dc71666-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.953736 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbsl\" (UniqueName: \"kubernetes.io/projected/670e29d4-f2fe-4d3d-be51-61fa2dc71666-kube-api-access-fdbsl\") on node \"crc\" DevicePath \"\"" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.984803 4902 scope.go:117] "RemoveContainer" containerID="61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6" Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.985349 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6\": container with ID starting with 61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6 not found: ID does not exist" containerID="61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.985411 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6"} err="failed to get container status \"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6\": rpc error: code = NotFound desc = could not find container \"61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6\": container with ID starting with 61835ff4cdbf6e2035856e0b7d9890f183e0569e73616b6f57fd963157c6fef6 not found: ID does not exist" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.985443 4902 scope.go:117] "RemoveContainer" containerID="c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c" Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.985911 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c\": container with ID starting with c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c not found: ID does not exist" containerID="c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.985941 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c"} err="failed to get container status \"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c\": rpc error: code = NotFound desc = could not find container \"c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c\": container with ID starting with c5cb1c68d424ef9a16252edab44341e6b10d4d4cffb018f9ae25dc4d78655a3c not found: ID does not exist" Jan 21 17:14:18 crc 
kubenswrapper[4902]: I0121 17:14:18.985959 4902 scope.go:117] "RemoveContainer" containerID="74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84" Jan 21 17:14:18 crc kubenswrapper[4902]: E0121 17:14:18.986307 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84\": container with ID starting with 74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84 not found: ID does not exist" containerID="74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84" Jan 21 17:14:18 crc kubenswrapper[4902]: I0121 17:14:18.986349 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84"} err="failed to get container status \"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84\": rpc error: code = NotFound desc = could not find container \"74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84\": container with ID starting with 74213e6a6c4851868f7d2dc49d4dfe0de8491fb0ab0fd9ab28a64bff22dcfd84 not found: ID does not exist" Jan 21 17:14:19 crc kubenswrapper[4902]: I0121 17:14:19.176530 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:19 crc kubenswrapper[4902]: I0121 17:14:19.187931 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c5vrg"] Jan 21 17:14:20 crc kubenswrapper[4902]: I0121 17:14:20.312825 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" path="/var/lib/kubelet/pods/670e29d4-f2fe-4d3d-be51-61fa2dc71666/volumes" Jan 21 17:14:33 crc kubenswrapper[4902]: I0121 17:14:33.296082 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:14:33 crc kubenswrapper[4902]: E0121 17:14:33.297559 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:14:44 crc kubenswrapper[4902]: I0121 17:14:44.295921 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:14:44 crc kubenswrapper[4902]: E0121 17:14:44.297023 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:14:57 crc kubenswrapper[4902]: I0121 17:14:57.295806 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:14:57 crc kubenswrapper[4902]: E0121 17:14:57.296516 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.161895 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4"] Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164117 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="extract-utilities" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164232 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="extract-utilities" Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164323 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="extract-utilities" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164416 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="extract-utilities" Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164493 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="registry-server" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164564 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="registry-server" Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164650 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="registry-server" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164730 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="registry-server" Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164806 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="extract-content" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.164883 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="extract-content" Jan 21 17:15:00 crc kubenswrapper[4902]: E0121 17:15:00.164988 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="extract-content" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.165067 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="extract-content" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.165332 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="670e29d4-f2fe-4d3d-be51-61fa2dc71666" containerName="registry-server" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.165439 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00bd4cb-eeca-472b-a935-c33859f82a60" containerName="registry-server" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.166345 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.173490 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.174496 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.176342 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4"] Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.271319 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.271930 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.272057 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.374027 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.374731 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.375749 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.378089 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") pod 
\"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.612373 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.612485 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") pod \"collect-profiles-29483595-694q4\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:00 crc kubenswrapper[4902]: I0121 17:15:00.799326 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:01 crc kubenswrapper[4902]: I0121 17:15:01.286543 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4"] Jan 21 17:15:01 crc kubenswrapper[4902]: I0121 17:15:01.296907 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" event={"ID":"67c24b2b-2c72-4d64-b26d-00f594c5c656","Type":"ContainerStarted","Data":"c805a1433d193153366f8c1693f631697a32b49eeb06fb692a163f2de93d7135"} Jan 21 17:15:02 crc kubenswrapper[4902]: I0121 17:15:02.310704 4902 generic.go:334] "Generic (PLEG): container finished" podID="67c24b2b-2c72-4d64-b26d-00f594c5c656" containerID="b311c7ab0782213c3bd1384e2339658ebe74e6b6859aa04130a324203f27a684" exitCode=0 Jan 21 17:15:02 crc kubenswrapper[4902]: I0121 17:15:02.315522 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" event={"ID":"67c24b2b-2c72-4d64-b26d-00f594c5c656","Type":"ContainerDied","Data":"b311c7ab0782213c3bd1384e2339658ebe74e6b6859aa04130a324203f27a684"} Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.717732 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.750657 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") pod \"67c24b2b-2c72-4d64-b26d-00f594c5c656\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.750718 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") pod \"67c24b2b-2c72-4d64-b26d-00f594c5c656\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.750774 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") pod \"67c24b2b-2c72-4d64-b26d-00f594c5c656\" (UID: \"67c24b2b-2c72-4d64-b26d-00f594c5c656\") " Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.752068 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume" (OuterVolumeSpecName: "config-volume") pod "67c24b2b-2c72-4d64-b26d-00f594c5c656" (UID: "67c24b2b-2c72-4d64-b26d-00f594c5c656"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.773716 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "67c24b2b-2c72-4d64-b26d-00f594c5c656" (UID: "67c24b2b-2c72-4d64-b26d-00f594c5c656"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.773881 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5" (OuterVolumeSpecName: "kube-api-access-2gdn5") pod "67c24b2b-2c72-4d64-b26d-00f594c5c656" (UID: "67c24b2b-2c72-4d64-b26d-00f594c5c656"). InnerVolumeSpecName "kube-api-access-2gdn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.853124 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gdn5\" (UniqueName: \"kubernetes.io/projected/67c24b2b-2c72-4d64-b26d-00f594c5c656-kube-api-access-2gdn5\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.853160 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c24b2b-2c72-4d64-b26d-00f594c5c656-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4902]: I0121 17:15:03.853171 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67c24b2b-2c72-4d64-b26d-00f594c5c656-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.345953 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" event={"ID":"67c24b2b-2c72-4d64-b26d-00f594c5c656","Type":"ContainerDied","Data":"c805a1433d193153366f8c1693f631697a32b49eeb06fb692a163f2de93d7135"} Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.346925 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c805a1433d193153366f8c1693f631697a32b49eeb06fb692a163f2de93d7135" Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.347408 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-694q4" Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.815720 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 17:15:04 crc kubenswrapper[4902]: I0121 17:15:04.827250 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-vz8jf"] Jan 21 17:15:06 crc kubenswrapper[4902]: I0121 17:15:06.320667 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8598a357-73ed-4850-bbd3-ce46d3d9a623" path="/var/lib/kubelet/pods/8598a357-73ed-4850-bbd3-ce46d3d9a623/volumes" Jan 21 17:15:10 crc kubenswrapper[4902]: I0121 17:15:10.295442 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:15:10 crc kubenswrapper[4902]: E0121 17:15:10.296194 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:15:24 crc kubenswrapper[4902]: I0121 17:15:24.298376 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:15:24 crc kubenswrapper[4902]: E0121 17:15:24.299468 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:15:37 crc kubenswrapper[4902]: I0121 17:15:37.351124 4902 scope.go:117] "RemoveContainer" containerID="1150c7694232d9425d7e1595d33c3ffaecb94a439744ef680974e317c8ea6ae2" Jan 21 17:15:39 crc kubenswrapper[4902]: I0121 17:15:39.295676 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:15:39 crc kubenswrapper[4902]: E0121 17:15:39.296191 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.325425 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:15:42 crc kubenswrapper[4902]: E0121 17:15:42.326584 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c24b2b-2c72-4d64-b26d-00f594c5c656" containerName="collect-profiles" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.326607 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c24b2b-2c72-4d64-b26d-00f594c5c656" containerName="collect-profiles" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.326995 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c24b2b-2c72-4d64-b26d-00f594c5c656" containerName="collect-profiles" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.329891 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.386729 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.405971 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.406278 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.406408 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.509333 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.509479 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.509533 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.510272 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.510280 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.533671 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") pod \"redhat-operators-rvs44\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:42 crc kubenswrapper[4902]: I0121 17:15:42.715996 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:43 crc kubenswrapper[4902]: I0121 17:15:43.227369 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:15:43 crc kubenswrapper[4902]: W0121 17:15:43.625881 4902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dec7b84_10f0_4e8b_a421_1ecf8e9f4f20.slice/crio-f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6 WatchSource:0}: Error finding container f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6: Status 404 returned error can't find the container with id f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6 Jan 21 17:15:43 crc kubenswrapper[4902]: I0121 17:15:43.854412 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerStarted","Data":"f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6"} Jan 21 17:15:44 crc kubenswrapper[4902]: I0121 17:15:44.865788 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerID="0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f" exitCode=0 Jan 21 17:15:44 crc kubenswrapper[4902]: I0121 17:15:44.865827 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerDied","Data":"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f"} Jan 21 17:15:44 crc kubenswrapper[4902]: I0121 17:15:44.868218 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:15:46 crc kubenswrapper[4902]: I0121 17:15:46.895401 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerStarted","Data":"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5"} Jan 21 17:15:46 crc kubenswrapper[4902]: E0121 17:15:46.964660 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dec7b84_10f0_4e8b_a421_1ecf8e9f4f20.slice/crio-0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5.scope\": RecentStats: unable to find data in memory cache]" Jan 21 17:15:49 crc kubenswrapper[4902]: I0121 17:15:49.941823 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerID="0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5" exitCode=0 Jan 21 17:15:49 crc kubenswrapper[4902]: I0121 17:15:49.942273 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerDied","Data":"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5"} Jan 21 17:15:50 crc 
kubenswrapper[4902]: I0121 17:15:50.956503 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerStarted","Data":"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd"} Jan 21 17:15:50 crc kubenswrapper[4902]: I0121 17:15:50.986641 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvs44" podStartSLOduration=3.418622988 podStartE2EDuration="8.986620337s" podCreationTimestamp="2026-01-21 17:15:42 +0000 UTC" firstStartedPulling="2026-01-21 17:15:44.868009419 +0000 UTC m=+9706.944842448" lastFinishedPulling="2026-01-21 17:15:50.436006758 +0000 UTC m=+9712.512839797" observedRunningTime="2026-01-21 17:15:50.985027582 +0000 UTC m=+9713.061860651" watchObservedRunningTime="2026-01-21 17:15:50.986620337 +0000 UTC m=+9713.063453356" Jan 21 17:15:52 crc kubenswrapper[4902]: I0121 17:15:52.716661 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:52 crc kubenswrapper[4902]: I0121 17:15:52.716981 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:15:53 crc kubenswrapper[4902]: I0121 17:15:53.294822 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:15:53 crc kubenswrapper[4902]: E0121 17:15:53.295178 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:15:53 crc kubenswrapper[4902]: I0121 17:15:53.785497 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvs44" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" probeResult="failure" output=< Jan 21 17:15:53 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 17:15:53 crc kubenswrapper[4902]: > Jan 21 17:16:02 crc kubenswrapper[4902]: I0121 17:16:02.775655 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:16:02 crc kubenswrapper[4902]: I0121 17:16:02.839109 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:16:03 crc kubenswrapper[4902]: I0121 17:16:03.029112 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.127994 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvs44" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" containerID="cri-o://7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" gracePeriod=2 Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.616300 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.739173 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") pod \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.739396 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") pod \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.739492 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") pod \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\" (UID: \"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20\") " Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.739894 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities" (OuterVolumeSpecName: "utilities") pod "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" (UID: "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.740493 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.745293 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp" (OuterVolumeSpecName: "kube-api-access-6jthp") pod "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" (UID: "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20"). InnerVolumeSpecName "kube-api-access-6jthp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.842097 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jthp\" (UniqueName: \"kubernetes.io/projected/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-kube-api-access-6jthp\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.871219 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" (UID: "9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:16:04 crc kubenswrapper[4902]: I0121 17:16:04.944476 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141511 4902 generic.go:334] "Generic (PLEG): container finished" podID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerID="7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" exitCode=0 Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141582 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvs44" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141596 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerDied","Data":"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd"} Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141841 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvs44" event={"ID":"9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20","Type":"ContainerDied","Data":"f58bcf3190a13953897768469416feb9d118e469c63386293d2490eae70ed3b6"} Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.141890 4902 scope.go:117] "RemoveContainer" containerID="7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.175480 4902 scope.go:117] "RemoveContainer" containerID="0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.210637 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.211196 4902 scope.go:117] "RemoveContainer" containerID="0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.224651 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvs44"] Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.254442 4902 scope.go:117] "RemoveContainer" containerID="7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" Jan 21 17:16:05 crc kubenswrapper[4902]: E0121 17:16:05.254995 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd\": container with ID starting with 7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd not found: ID does not exist" containerID="7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.255025 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd"} err="failed to get container status \"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd\": rpc error: code = NotFound desc = could not find container \"7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd\": container with ID starting with 7712af748295b69d7fb0a2b42607f45ef4699730420521a8d63a69dd16a081fd not found: ID does not exist" Jan 21 17:16:05 crc 
kubenswrapper[4902]: I0121 17:16:05.255089 4902 scope.go:117] "RemoveContainer" containerID="0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5" Jan 21 17:16:05 crc kubenswrapper[4902]: E0121 17:16:05.255780 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5\": container with ID starting with 0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5 not found: ID does not exist" containerID="0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.255832 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5"} err="failed to get container status \"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5\": rpc error: code = NotFound desc = could not find container \"0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5\": container with ID starting with 0a5b4072de6ecac8b8e0e68a1fd2c39096843a835db49a69730566c30990cef5 not found: ID does not exist" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.255866 4902 scope.go:117] "RemoveContainer" containerID="0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f" Jan 21 17:16:05 crc kubenswrapper[4902]: E0121 17:16:05.257866 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f\": container with ID starting with 0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f not found: ID does not exist" containerID="0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f" Jan 21 17:16:05 crc kubenswrapper[4902]: I0121 17:16:05.258402 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f"} err="failed to get container status \"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f\": rpc error: code = NotFound desc = could not find container \"0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f\": container with ID starting with 0e23a41ee716f884d9a85c55bc9de17dd1e34c3c252f673cf0241a783856f12f not found: ID does not exist" Jan 21 17:16:06 crc kubenswrapper[4902]: I0121 17:16:06.318108 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" path="/var/lib/kubelet/pods/9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20/volumes" Jan 21 17:16:08 crc kubenswrapper[4902]: I0121 17:16:08.313697 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:16:08 crc kubenswrapper[4902]: E0121 17:16:08.316802 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:16:19 crc kubenswrapper[4902]: I0121 17:16:19.298360 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" 
Jan 21 17:16:19 crc kubenswrapper[4902]: E0121 17:16:19.299786 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:16:32 crc kubenswrapper[4902]: I0121 17:16:32.296121 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:16:32 crc kubenswrapper[4902]: E0121 17:16:32.297662 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:16:46 crc kubenswrapper[4902]: I0121 17:16:46.295308 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:16:46 crc kubenswrapper[4902]: E0121 17:16:46.296289 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:16:58 crc kubenswrapper[4902]: I0121 17:16:58.301612 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:16:58 crc kubenswrapper[4902]: E0121 17:16:58.302498 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:09 crc kubenswrapper[4902]: I0121 17:17:09.294889 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:09 crc kubenswrapper[4902]: E0121 17:17:09.295961 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:21 crc kubenswrapper[4902]: I0121 17:17:21.296884 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:21 crc kubenswrapper[4902]: E0121 17:17:21.297871 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:32 crc kubenswrapper[4902]: I0121 17:17:32.303288 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:32 crc kubenswrapper[4902]: E0121 17:17:32.304333 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:45 crc kubenswrapper[4902]: I0121 17:17:45.295295 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:45 crc kubenswrapper[4902]: E0121 17:17:45.296192 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:17:57 crc kubenswrapper[4902]: I0121 17:17:57.295467 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:17:57 crc kubenswrapper[4902]: E0121 17:17:57.296123 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:18:12 crc kubenswrapper[4902]: I0121 17:18:12.295164 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:18:12 crc kubenswrapper[4902]: E0121 17:18:12.295874 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:18:24 crc kubenswrapper[4902]: I0121 17:18:24.295433 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:18:24 crc kubenswrapper[4902]: E0121 17:18:24.296337 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:18:37 crc kubenswrapper[4902]: I0121 17:18:37.306025 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:18:37 crc kubenswrapper[4902]: E0121 17:18:37.306887 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:18:48 crc kubenswrapper[4902]: I0121 17:18:48.313084 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:18:48 crc kubenswrapper[4902]: E0121 17:18:48.314564 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:19:01 crc kubenswrapper[4902]: I0121 17:19:01.294962 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:19:01 crc kubenswrapper[4902]: E0121 17:19:01.295606 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:19:12 crc kubenswrapper[4902]: I0121 17:19:12.295008 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:19:12 crc kubenswrapper[4902]: E0121 17:19:12.295763 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:19:25 crc kubenswrapper[4902]: I0121 17:19:25.296178 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:19:26 crc kubenswrapper[4902]: I0121 17:19:26.548465 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1"} Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.180467 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"] Jan 21 17:21:01 crc kubenswrapper[4902]: E0121 17:21:01.181678 
4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.181692 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" Jan 21 17:21:01 crc kubenswrapper[4902]: E0121 17:21:01.181704 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="extract-utilities" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.181714 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="extract-utilities" Jan 21 17:21:01 crc kubenswrapper[4902]: E0121 17:21:01.181738 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="extract-content" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.181745 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="extract-content" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.182023 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dec7b84-10f0-4e8b-a421-1ecf8e9f4f20" containerName="registry-server" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.183904 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.200246 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"] Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.341461 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.341767 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67wt\" (UniqueName: \"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.341802 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444170 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444329 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67wt\" (UniqueName: 
\"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444368 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444812 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.444957 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.466192 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67wt\" (UniqueName: \"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") pod \"redhat-marketplace-6frl2\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:01 crc kubenswrapper[4902]: I0121 17:21:01.503768 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.027558 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"] Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.636178 4902 generic.go:334] "Generic (PLEG): container finished" podID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerID="7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7" exitCode=0 Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.636330 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerDied","Data":"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7"} Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.636579 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerStarted","Data":"69d9d18b10b5abdd62316b10a1673d000d4b4df0964929f0f8317380d299210f"} Jan 21 17:21:02 crc kubenswrapper[4902]: I0121 17:21:02.639255 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:21:03 crc kubenswrapper[4902]: I0121 17:21:03.649366 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerStarted","Data":"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481"} Jan 21 17:21:04 crc kubenswrapper[4902]: I0121 17:21:04.670133 4902 generic.go:334] "Generic (PLEG): container finished" podID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerID="fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481" exitCode=0 Jan 21 17:21:04 crc kubenswrapper[4902]: I0121 17:21:04.670539 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerDied","Data":"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481"} Jan 21 17:21:05 crc kubenswrapper[4902]: I0121 17:21:05.686442 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerStarted","Data":"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be"} Jan 21 17:21:05 crc kubenswrapper[4902]: I0121 17:21:05.713489 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6frl2" podStartSLOduration=2.142636655 podStartE2EDuration="4.713472887s" podCreationTimestamp="2026-01-21 17:21:01 +0000 UTC" firstStartedPulling="2026-01-21 17:21:02.638914028 +0000 UTC m=+10024.715747057" lastFinishedPulling="2026-01-21 17:21:05.20975023 +0000 UTC m=+10027.286583289" observedRunningTime="2026-01-21 17:21:05.70719768 +0000 UTC m=+10027.784030709" watchObservedRunningTime="2026-01-21 17:21:05.713472887 +0000 UTC m=+10027.790305916" Jan 21 17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.505573 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.506179 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 
17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.560779 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.807641 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:11 crc kubenswrapper[4902]: I0121 17:21:11.895024 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"] Jan 21 17:21:13 crc kubenswrapper[4902]: I0121 17:21:13.766327 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6frl2" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="registry-server" containerID="cri-o://fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be" gracePeriod=2 Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.257995 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.429666 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") pod \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.430384 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") pod \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.430635 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f67wt\" (UniqueName: \"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") pod \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\" (UID: \"ecbac09f-37bd-4a3e-bf8c-5fe146260041\") " Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.431140 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities" (OuterVolumeSpecName: "utilities") pod "ecbac09f-37bd-4a3e-bf8c-5fe146260041" (UID: "ecbac09f-37bd-4a3e-bf8c-5fe146260041"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.432106 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.435592 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt" (OuterVolumeSpecName: "kube-api-access-f67wt") pod "ecbac09f-37bd-4a3e-bf8c-5fe146260041" (UID: "ecbac09f-37bd-4a3e-bf8c-5fe146260041"). InnerVolumeSpecName "kube-api-access-f67wt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.470134 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecbac09f-37bd-4a3e-bf8c-5fe146260041" (UID: "ecbac09f-37bd-4a3e-bf8c-5fe146260041"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.539173 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbac09f-37bd-4a3e-bf8c-5fe146260041-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.539205 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f67wt\" (UniqueName: \"kubernetes.io/projected/ecbac09f-37bd-4a3e-bf8c-5fe146260041-kube-api-access-f67wt\") on node \"crc\" DevicePath \"\"" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.785928 4902 generic.go:334] "Generic (PLEG): container finished" podID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerID="fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be" exitCode=0 Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.786009 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerDied","Data":"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be"} Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.786148 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6frl2" event={"ID":"ecbac09f-37bd-4a3e-bf8c-5fe146260041","Type":"ContainerDied","Data":"69d9d18b10b5abdd62316b10a1673d000d4b4df0964929f0f8317380d299210f"} Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.786192 4902 scope.go:117] "RemoveContainer" containerID="fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.786426 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6frl2" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.847708 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"] Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.851508 4902 scope.go:117] "RemoveContainer" containerID="fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.862735 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6frl2"] Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.891973 4902 scope.go:117] "RemoveContainer" containerID="7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.928213 4902 scope.go:117] "RemoveContainer" containerID="fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be" Jan 21 17:21:14 crc kubenswrapper[4902]: E0121 17:21:14.929849 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be\": container with ID starting with fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be not found: ID does not exist" containerID="fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.929912 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be"} err="failed to get container status \"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be\": rpc error: code = NotFound desc = could not find container \"fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be\": container with ID starting with fee0b66c99e601106e365cfd2871cd324b4fda059c24fa42ed845ba3453554be not found: ID does not exist" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.929935 4902 scope.go:117] "RemoveContainer" containerID="fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481" Jan 21 17:21:14 crc kubenswrapper[4902]: E0121 17:21:14.934432 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481\": container with ID starting with fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481 not found: ID does not exist" containerID="fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.934466 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481"} err="failed to get container status \"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481\": rpc error: code = NotFound desc = could not find container \"fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481\": container with ID starting with fe3de5fc93197fb01fc562294f68aecd488a190d304cf7e878d1701ca21b4481 not found: ID does not exist" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.934482 4902 scope.go:117] "RemoveContainer" containerID="7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7" Jan 21 17:21:14 crc kubenswrapper[4902]: E0121 17:21:14.935001 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7\": container with ID starting with 7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7 not found: ID does not exist" containerID="7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7" Jan 21 17:21:14 crc kubenswrapper[4902]: I0121 17:21:14.935033 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7"} err="failed to get container status \"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7\": rpc error: code = NotFound desc = could not find container \"7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7\": container with ID starting with 7f6d4871b5cfa39a7307810ea537afda726bf16068efc742569f2d1358c166b7 not found: ID does not exist" Jan 21 17:21:16 crc kubenswrapper[4902]: I0121 17:21:16.320016 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" path="/var/lib/kubelet/pods/ecbac09f-37bd-4a3e-bf8c-5fe146260041/volumes" Jan 21 17:21:47 crc kubenswrapper[4902]: I0121 17:21:47.769830 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:21:47 crc kubenswrapper[4902]: I0121 17:21:47.770524 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:22:17 crc kubenswrapper[4902]: I0121 17:22:17.769513 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:22:17 crc kubenswrapper[4902]: I0121 17:22:17.770145 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.776633 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.777290 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.777350 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.778227 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:22:47 crc kubenswrapper[4902]: I0121 17:22:47.778306 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1" gracePeriod=600 Jan 21 17:22:48 crc kubenswrapper[4902]: I0121 17:22:48.875938 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1" exitCode=0 Jan 21 17:22:48 crc kubenswrapper[4902]: I0121 17:22:48.876025 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1"} Jan 21 17:22:48 crc kubenswrapper[4902]: I0121 17:22:48.877448 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e"} Jan 21 17:22:48 crc kubenswrapper[4902]: I0121 17:22:48.877551 4902 scope.go:117] "RemoveContainer" containerID="1b725cd0ee29b6f8374e2f34787b61a9310a7907e3a249f15236c6402627deb1" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.965917 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:40 crc kubenswrapper[4902]: E0121 17:24:40.983208 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="registry-server" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.983467 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="registry-server" Jan 21 17:24:40 crc kubenswrapper[4902]: E0121 17:24:40.983575 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="extract-utilities" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.983662 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="extract-utilities" Jan 21 17:24:40 crc kubenswrapper[4902]: E0121 17:24:40.983775 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="extract-content" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.983856 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="extract-content" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.984410 4902 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ecbac09f-37bd-4a3e-bf8c-5fe146260041" containerName="registry-server" Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.988481 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:40 crc kubenswrapper[4902]: I0121 17:24:40.988781 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.024577 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.024670 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.024818 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.127743 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.128098 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.128290 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.128727 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.129332 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") pod \"community-operators-st2m5\" (UID: 
\"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.150188 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") pod \"community-operators-st2m5\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.332600 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:41 crc kubenswrapper[4902]: I0121 17:24:41.890140 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:42 crc kubenswrapper[4902]: I0121 17:24:42.149609 4902 generic.go:334] "Generic (PLEG): container finished" podID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerID="2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea" exitCode=0 Jan 21 17:24:42 crc kubenswrapper[4902]: I0121 17:24:42.149657 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerDied","Data":"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea"} Jan 21 17:24:42 crc kubenswrapper[4902]: I0121 17:24:42.149683 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerStarted","Data":"9bf11b95b15931c87e146b54d872973cb5b5c988f66b6c170a9e9e5ee1b3604f"} Jan 21 17:24:43 crc kubenswrapper[4902]: I0121 17:24:43.160746 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerStarted","Data":"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9"} Jan 21 17:24:44 crc kubenswrapper[4902]: I0121 17:24:44.188964 4902 generic.go:334] "Generic (PLEG): container finished" podID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerID="b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9" exitCode=0 Jan 21 17:24:44 crc kubenswrapper[4902]: I0121 17:24:44.189094 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerDied","Data":"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9"} Jan 21 17:24:45 crc kubenswrapper[4902]: I0121 17:24:45.204121 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerStarted","Data":"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa"} Jan 21 17:24:45 crc kubenswrapper[4902]: I0121 17:24:45.233877 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-st2m5" podStartSLOduration=2.760754568 podStartE2EDuration="5.233854219s" podCreationTimestamp="2026-01-21 17:24:40 +0000 UTC" firstStartedPulling="2026-01-21 17:24:42.153367527 +0000 UTC m=+10244.230200546" lastFinishedPulling="2026-01-21 17:24:44.626467128 +0000 UTC m=+10246.703300197" observedRunningTime="2026-01-21 17:24:45.226958425 +0000 UTC m=+10247.303791494" 
watchObservedRunningTime="2026-01-21 17:24:45.233854219 +0000 UTC m=+10247.310687248" Jan 21 17:24:51 crc kubenswrapper[4902]: I0121 17:24:51.333369 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:51 crc kubenswrapper[4902]: I0121 17:24:51.333842 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:51 crc kubenswrapper[4902]: I0121 17:24:51.407745 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:52 crc kubenswrapper[4902]: I0121 17:24:52.374168 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:52 crc kubenswrapper[4902]: I0121 17:24:52.460767 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.307151 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-st2m5" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="registry-server" containerID="cri-o://19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" gracePeriod=2 Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.782269 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.921998 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") pod \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.922344 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") pod \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.922440 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") pod \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\" (UID: \"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2\") " Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.924176 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities" (OuterVolumeSpecName: "utilities") pod "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" (UID: "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.931449 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh" (OuterVolumeSpecName: "kube-api-access-9blfh") pod "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" (UID: "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2"). InnerVolumeSpecName "kube-api-access-9blfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:24:54 crc kubenswrapper[4902]: I0121 17:24:54.989654 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" (UID: "0846ffa8-a7a2-47b2-b8dc-aa69b321fef2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.024893 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.024922 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.024938 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9blfh\" (UniqueName: \"kubernetes.io/projected/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2-kube-api-access-9blfh\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.323230 4902 generic.go:334] "Generic (PLEG): container finished" podID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerID="19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" exitCode=0 Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.324330 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerDied","Data":"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa"} Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.324480 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-st2m5" event={"ID":"0846ffa8-a7a2-47b2-b8dc-aa69b321fef2","Type":"ContainerDied","Data":"9bf11b95b15931c87e146b54d872973cb5b5c988f66b6c170a9e9e5ee1b3604f"} Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.324609 4902 scope.go:117] "RemoveContainer" containerID="19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.324907 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-st2m5" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.356286 4902 scope.go:117] "RemoveContainer" containerID="b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.391475 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.402280 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-st2m5"] Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.413758 4902 scope.go:117] "RemoveContainer" containerID="2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.453725 4902 scope.go:117] "RemoveContainer" containerID="19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" Jan 21 17:24:55 crc kubenswrapper[4902]: E0121 17:24:55.454181 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa\": container with ID starting with 19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa not found: ID does not exist" containerID="19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454250 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa"} err="failed to get container status \"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa\": rpc error: code = NotFound desc = could not find container \"19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa\": container with ID starting with 19e5ebb911b8d1015327b60d9a20e9c1bb3a107ba27f01d362f9013acce345aa not found: ID does not exist" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454283 4902 scope.go:117] "RemoveContainer" containerID="b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9" Jan 21 17:24:55 crc kubenswrapper[4902]: E0121 17:24:55.454606 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9\": container with ID starting with b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9 not found: ID does not exist" containerID="b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454642 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9"} err="failed to get container status \"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9\": rpc error: code = NotFound desc = could not find container \"b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9\": container with ID starting with b31fc7c57aad02620f067cf3d30a505129008496697692d48411e4e1e02b19a9 not found: ID does not exist" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454686 4902 scope.go:117] "RemoveContainer" containerID="2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea" Jan 21 17:24:55 crc kubenswrapper[4902]: E0121 17:24:55.454933 4902 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea\": container with ID starting with 2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea not found: ID does not exist" containerID="2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea" Jan 21 17:24:55 crc kubenswrapper[4902]: I0121 17:24:55.454963 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea"} err="failed to get container status \"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea\": rpc error: code = NotFound desc = could not find container \"2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea\": container with ID starting with 2bf183c9fad0e08c87d322a8fb144d6c166cf69e7ccd0145fc6e2547cd5e21ea not found: ID does not exist" Jan 21 17:24:56 crc kubenswrapper[4902]: I0121 17:24:56.312315 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" path="/var/lib/kubelet/pods/0846ffa8-a7a2-47b2-b8dc-aa69b321fef2/volumes" Jan 21 17:25:17 crc kubenswrapper[4902]: I0121 17:25:17.770154 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:25:17 crc kubenswrapper[4902]: I0121 17:25:17.770945 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.774658 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"] Jan 21 17:25:19 crc kubenswrapper[4902]: E0121 17:25:19.775561 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="extract-content" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.775576 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="extract-content" Jan 21 17:25:19 crc kubenswrapper[4902]: E0121 17:25:19.775600 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="extract-utilities" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.775608 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="extract-utilities" Jan 21 17:25:19 crc kubenswrapper[4902]: E0121 17:25:19.775628 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="registry-server" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.775636 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="registry-server" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.775865 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="0846ffa8-a7a2-47b2-b8dc-aa69b321fef2" containerName="registry-server" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 
17:25:19.777769 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.789196 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.789365 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.789523 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.795238 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"] Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.891234 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.891293 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.891334 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.891792 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc kubenswrapper[4902]: I0121 17:25:19.892730 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:19 crc 
kubenswrapper[4902]: I0121 17:25:19.914121 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") pod \"certified-operators-g8sp7\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:20 crc kubenswrapper[4902]: I0121 17:25:20.112504 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:20 crc kubenswrapper[4902]: I0121 17:25:20.666825 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"] Jan 21 17:25:21 crc kubenswrapper[4902]: I0121 17:25:21.652924 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerID="66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966" exitCode=0 Jan 21 17:25:21 crc kubenswrapper[4902]: I0121 17:25:21.653378 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerDied","Data":"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966"} Jan 21 17:25:21 crc kubenswrapper[4902]: I0121 17:25:21.656546 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerStarted","Data":"767d280e7edd1011a36ba60c402bed1755ea287ad9f9239295ded8a1b49d6336"} Jan 21 17:25:22 crc kubenswrapper[4902]: I0121 17:25:22.673779 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerStarted","Data":"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2"} Jan 21 17:25:22 crc kubenswrapper[4902]: E0121 17:25:22.987331 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1fb18_dd95_405a_b744_92b02ac80b20.slice/crio-c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee1fb18_dd95_405a_b744_92b02ac80b20.slice/crio-conmon-c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2.scope\": RecentStats: unable to find data in memory cache]" Jan 21 17:25:23 crc kubenswrapper[4902]: I0121 17:25:23.692615 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerID="c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2" exitCode=0 Jan 21 17:25:23 crc kubenswrapper[4902]: I0121 17:25:23.692682 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerDied","Data":"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2"} Jan 21 17:25:25 crc kubenswrapper[4902]: I0121 17:25:25.716699 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerStarted","Data":"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae"} Jan 21 17:25:25 crc 
kubenswrapper[4902]: I0121 17:25:25.754146 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g8sp7" podStartSLOduration=4.283609005 podStartE2EDuration="6.754122604s" podCreationTimestamp="2026-01-21 17:25:19 +0000 UTC" firstStartedPulling="2026-01-21 17:25:21.655333597 +0000 UTC m=+10283.732166636" lastFinishedPulling="2026-01-21 17:25:24.125847206 +0000 UTC m=+10286.202680235" observedRunningTime="2026-01-21 17:25:25.740167241 +0000 UTC m=+10287.817000290" watchObservedRunningTime="2026-01-21 17:25:25.754122604 +0000 UTC m=+10287.830955653" Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.113625 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.114270 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.208479 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.876182 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:30 crc kubenswrapper[4902]: I0121 17:25:30.955547 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"] Jan 21 17:25:32 crc kubenswrapper[4902]: I0121 17:25:32.816909 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g8sp7" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="registry-server" containerID="cri-o://de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae" gracePeriod=2 Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.815680 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826783 4902 generic.go:334] "Generic (PLEG): container finished" podID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerID="de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae" exitCode=0 Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826839 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g8sp7" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826849 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerDied","Data":"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae"} Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826884 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8sp7" event={"ID":"5ee1fb18-dd95-405a-b744-92b02ac80b20","Type":"ContainerDied","Data":"767d280e7edd1011a36ba60c402bed1755ea287ad9f9239295ded8a1b49d6336"} Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.826903 4902 scope.go:117] "RemoveContainer" containerID="de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.883521 4902 scope.go:117] "RemoveContainer" containerID="c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.914189 4902 scope.go:117] "RemoveContainer" containerID="66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.957140 4902 scope.go:117] "RemoveContainer" containerID="de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae" Jan 21 17:25:33 crc kubenswrapper[4902]: E0121 17:25:33.957657 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae\": container with ID starting with de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae not found: ID does not exist" containerID="de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.957692 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae"} err="failed to get container status \"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae\": rpc error: code = NotFound desc = could not find container \"de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae\": container with ID starting with de5b70b3224eb082a3c73aa11db0e462ada80232dd05ba460016e3fafa1f75ae not found: ID does not exist" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.957713 4902 scope.go:117] "RemoveContainer" containerID="c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2" Jan 21 17:25:33 crc kubenswrapper[4902]: E0121 17:25:33.958645 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2\": container with ID starting with c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2 not found: ID does not exist" containerID="c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.958668 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2"} err="failed to get container status \"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2\": rpc error: code = NotFound desc = could not find container 
\"c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2\": container with ID starting with c6c57129e4409f784710664c838337c74429d78610056b9cceb32fcc2f8751e2 not found: ID does not exist" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.958684 4902 scope.go:117] "RemoveContainer" containerID="66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966" Jan 21 17:25:33 crc kubenswrapper[4902]: E0121 17:25:33.959270 4902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966\": container with ID starting with 66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966 not found: ID does not exist" containerID="66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966" Jan 21 17:25:33 crc kubenswrapper[4902]: I0121 17:25:33.959293 4902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966"} err="failed to get container status \"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966\": rpc error: code = NotFound desc = could not find container \"66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966\": container with ID starting with 66ec7930a0f998b00a52bc887b982ecf3aeaa4dd5522ccf09006cee0c769d966 not found: ID does not exist" Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.054895 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") pod \"5ee1fb18-dd95-405a-b744-92b02ac80b20\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.055034 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") pod \"5ee1fb18-dd95-405a-b744-92b02ac80b20\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.055145 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") pod \"5ee1fb18-dd95-405a-b744-92b02ac80b20\" (UID: \"5ee1fb18-dd95-405a-b744-92b02ac80b20\") " Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.055873 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities" (OuterVolumeSpecName: "utilities") pod "5ee1fb18-dd95-405a-b744-92b02ac80b20" (UID: "5ee1fb18-dd95-405a-b744-92b02ac80b20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.064322 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz" (OuterVolumeSpecName: "kube-api-access-4knpz") pod "5ee1fb18-dd95-405a-b744-92b02ac80b20" (UID: "5ee1fb18-dd95-405a-b744-92b02ac80b20"). InnerVolumeSpecName "kube-api-access-4knpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.103117 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ee1fb18-dd95-405a-b744-92b02ac80b20" (UID: "5ee1fb18-dd95-405a-b744-92b02ac80b20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.159588 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4knpz\" (UniqueName: \"kubernetes.io/projected/5ee1fb18-dd95-405a-b744-92b02ac80b20-kube-api-access-4knpz\") on node \"crc\" DevicePath \"\"" Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.159852 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.159944 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee1fb18-dd95-405a-b744-92b02ac80b20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.170336 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"] Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.185911 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g8sp7"] Jan 21 17:25:34 crc kubenswrapper[4902]: I0121 17:25:34.321303 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" path="/var/lib/kubelet/pods/5ee1fb18-dd95-405a-b744-92b02ac80b20/volumes" Jan 21 17:25:47 crc kubenswrapper[4902]: I0121 17:25:47.770331 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:25:47 crc kubenswrapper[4902]: I0121 17:25:47.771271 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.772133 4902 patch_prober.go:28] interesting pod/machine-config-daemon-m2bnb container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.773058 4902 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.773122 4902 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.775122 4902 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e"} pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:26:17 crc kubenswrapper[4902]: I0121 17:26:17.775254 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerName="machine-config-daemon" containerID="cri-o://416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" gracePeriod=600 Jan 21 17:26:17 crc kubenswrapper[4902]: E0121 17:26:17.942412 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:26:18 crc kubenswrapper[4902]: I0121 17:26:18.302917 4902 generic.go:334] "Generic (PLEG): container finished" podID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" exitCode=0 Jan 21 17:26:18 crc kubenswrapper[4902]: I0121 17:26:18.311533 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerDied","Data":"416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e"} Jan 21 17:26:18 crc kubenswrapper[4902]: I0121 17:26:18.311816 4902 scope.go:117] "RemoveContainer" containerID="5e6bee27c568351479893fd8644172bc1970f833c3f9b00f5a27074b919cd4b1" Jan 21 17:26:18 crc kubenswrapper[4902]: I0121 17:26:18.312865 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:26:18 crc kubenswrapper[4902]: E0121 17:26:18.313233 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.777608 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:22 crc kubenswrapper[4902]: E0121 17:26:22.779325 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="registry-server" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.779339 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="registry-server" Jan 21 17:26:22 crc kubenswrapper[4902]: E0121 17:26:22.779360 4902 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="extract-utilities" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.779366 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="extract-utilities" Jan 21 17:26:22 crc kubenswrapper[4902]: E0121 17:26:22.779374 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="extract-content" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.779380 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="extract-content" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.789653 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee1fb18-dd95-405a-b744-92b02ac80b20" containerName="registry-server" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.791381 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.809398 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.809536 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.809570 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.818163 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.911310 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.911373 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.911476 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") pod \"redhat-operators-tfztn\" (UID: 
\"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.911886 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.912268 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:22 crc kubenswrapper[4902]: I0121 17:26:22.935869 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") pod \"redhat-operators-tfztn\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:23 crc kubenswrapper[4902]: I0121 17:26:23.121111 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:23 crc kubenswrapper[4902]: I0121 17:26:23.598659 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:24 crc kubenswrapper[4902]: I0121 17:26:24.384032 4902 generic.go:334] "Generic (PLEG): container finished" podID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerID="232ab40424e3be90e3da29f1cecb02852c58216654f49b221218b2c376a531d9" exitCode=0 Jan 21 17:26:24 crc kubenswrapper[4902]: I0121 17:26:24.384378 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerDied","Data":"232ab40424e3be90e3da29f1cecb02852c58216654f49b221218b2c376a531d9"} Jan 21 17:26:24 crc kubenswrapper[4902]: I0121 17:26:24.384408 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerStarted","Data":"47fb3fc789982b0726a9eaca33d56d48b53a5ffbb3e37b220abd7de140c09302"} Jan 21 17:26:24 crc kubenswrapper[4902]: I0121 17:26:24.388881 4902 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:26:26 crc kubenswrapper[4902]: I0121 17:26:26.406571 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerStarted","Data":"75b14825d04b1497405658d6882c82cf75e25621b33fb88924b442ab680cdf83"} Jan 21 17:26:27 crc kubenswrapper[4902]: I0121 17:26:27.417426 4902 generic.go:334] "Generic (PLEG): container finished" podID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerID="75b14825d04b1497405658d6882c82cf75e25621b33fb88924b442ab680cdf83" exitCode=0 Jan 21 17:26:27 crc kubenswrapper[4902]: I0121 17:26:27.417589 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" 
event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerDied","Data":"75b14825d04b1497405658d6882c82cf75e25621b33fb88924b442ab680cdf83"} Jan 21 17:26:30 crc kubenswrapper[4902]: I0121 17:26:30.296011 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:26:30 crc kubenswrapper[4902]: E0121 17:26:30.296914 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:26:31 crc kubenswrapper[4902]: I0121 17:26:31.463269 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerStarted","Data":"fb7e180575d42000763d72a5e12d5b9cefd04449586a7b77c1636ec477704dfa"} Jan 21 17:26:33 crc kubenswrapper[4902]: I0121 17:26:33.121468 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:33 crc kubenswrapper[4902]: I0121 17:26:33.121862 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:34 crc kubenswrapper[4902]: I0121 17:26:34.182886 4902 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tfztn" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" probeResult="failure" output=< Jan 21 17:26:34 crc kubenswrapper[4902]: timeout: failed to connect service ":50051" within 1s Jan 21 17:26:34 crc kubenswrapper[4902]: > Jan 21 17:26:43 crc kubenswrapper[4902]: I0121 17:26:43.208862 4902 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:43 crc kubenswrapper[4902]: I0121 17:26:43.243821 4902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tfztn" podStartSLOduration=17.210243413 podStartE2EDuration="21.243797431s" podCreationTimestamp="2026-01-21 17:26:22 +0000 UTC" firstStartedPulling="2026-01-21 17:26:24.387231262 +0000 UTC m=+10346.464064291" lastFinishedPulling="2026-01-21 17:26:28.42078529 +0000 UTC m=+10350.497618309" observedRunningTime="2026-01-21 17:26:31.488456131 +0000 UTC m=+10353.565289160" watchObservedRunningTime="2026-01-21 17:26:43.243797431 +0000 UTC m=+10365.320630500" Jan 21 17:26:43 crc kubenswrapper[4902]: I0121 17:26:43.284253 4902 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:43 crc kubenswrapper[4902]: I0121 17:26:43.458742 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:44 crc kubenswrapper[4902]: I0121 17:26:44.690311 4902 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tfztn" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" containerID="cri-o://fb7e180575d42000763d72a5e12d5b9cefd04449586a7b77c1636ec477704dfa" gracePeriod=2 Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.297708 4902 
scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:26:45 crc kubenswrapper[4902]: E0121 17:26:45.298537 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.700577 4902 generic.go:334] "Generic (PLEG): container finished" podID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerID="fb7e180575d42000763d72a5e12d5b9cefd04449586a7b77c1636ec477704dfa" exitCode=0 Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.700647 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerDied","Data":"fb7e180575d42000763d72a5e12d5b9cefd04449586a7b77c1636ec477704dfa"} Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.700681 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfztn" event={"ID":"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a","Type":"ContainerDied","Data":"47fb3fc789982b0726a9eaca33d56d48b53a5ffbb3e37b220abd7de140c09302"} Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.700695 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47fb3fc789982b0726a9eaca33d56d48b53a5ffbb3e37b220abd7de140c09302" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.753581 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.778937 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") pod \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.779270 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") pod \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.779325 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") pod \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\" (UID: \"8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a\") " Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.780382 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities" (OuterVolumeSpecName: "utilities") pod "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" (UID: "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.788005 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj" (OuterVolumeSpecName: "kube-api-access-485pj") pod "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" (UID: "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a"). InnerVolumeSpecName "kube-api-access-485pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.881221 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-485pj\" (UniqueName: \"kubernetes.io/projected/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-kube-api-access-485pj\") on node \"crc\" DevicePath \"\"" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.881254 4902 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.907142 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" (UID: "8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:26:45 crc kubenswrapper[4902]: I0121 17:26:45.983586 4902 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:26:46 crc kubenswrapper[4902]: I0121 17:26:46.716559 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfztn" Jan 21 17:26:46 crc kubenswrapper[4902]: I0121 17:26:46.746716 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:46 crc kubenswrapper[4902]: I0121 17:26:46.758307 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tfztn"] Jan 21 17:26:48 crc kubenswrapper[4902]: I0121 17:26:48.354967 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" path="/var/lib/kubelet/pods/8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a/volumes" Jan 21 17:26:59 crc kubenswrapper[4902]: I0121 17:26:59.295433 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:26:59 crc kubenswrapper[4902]: E0121 17:26:59.296215 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:27:12 crc kubenswrapper[4902]: I0121 17:27:12.300948 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:27:12 crc kubenswrapper[4902]: E0121 17:27:12.301693 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:27:25 crc kubenswrapper[4902]: I0121 17:27:25.295028 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:27:25 crc kubenswrapper[4902]: E0121 17:27:25.298690 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:27:39 crc kubenswrapper[4902]: I0121 17:27:39.295279 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:27:39 crc kubenswrapper[4902]: E0121 17:27:39.295876 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:27:50 crc kubenswrapper[4902]: I0121 17:27:50.299181 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:27:50 crc 
kubenswrapper[4902]: E0121 17:27:50.300165 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:02 crc kubenswrapper[4902]: I0121 17:28:02.295513 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:02 crc kubenswrapper[4902]: E0121 17:28:02.296377 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:13 crc kubenswrapper[4902]: I0121 17:28:13.295438 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:13 crc kubenswrapper[4902]: E0121 17:28:13.296221 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:28 crc kubenswrapper[4902]: I0121 17:28:28.302573 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:28 crc kubenswrapper[4902]: E0121 17:28:28.303464 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:41 crc kubenswrapper[4902]: I0121 17:28:41.295420 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:41 crc kubenswrapper[4902]: E0121 17:28:41.296308 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:28:55 crc kubenswrapper[4902]: I0121 17:28:55.296231 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:28:55 crc kubenswrapper[4902]: E0121 17:28:55.297337 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:08 crc kubenswrapper[4902]: I0121 17:29:08.311347 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:08 crc kubenswrapper[4902]: E0121 17:29:08.311997 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:20 crc kubenswrapper[4902]: I0121 17:29:20.296187 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:20 crc kubenswrapper[4902]: E0121 17:29:20.297263 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:31 crc kubenswrapper[4902]: I0121 17:29:31.295575 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:31 crc kubenswrapper[4902]: E0121 17:29:31.296713 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:45 crc kubenswrapper[4902]: I0121 17:29:45.295271 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:45 crc kubenswrapper[4902]: E0121 17:29:45.295993 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:29:57 crc kubenswrapper[4902]: I0121 17:29:57.294568 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:29:57 crc kubenswrapper[4902]: E0121 17:29:57.295266 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.200763 4902 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29"] Jan 21 17:30:00 crc kubenswrapper[4902]: E0121 17:30:00.203144 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="extract-utilities" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.203285 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="extract-utilities" Jan 21 17:30:00 crc kubenswrapper[4902]: E0121 17:30:00.203430 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.203520 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" Jan 21 17:30:00 crc kubenswrapper[4902]: E0121 17:30:00.203624 4902 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="extract-content" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.203713 4902 state_mem.go:107] "Deleted CPUSet assignment" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="extract-content" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.204111 4902 memory_manager.go:354] "RemoveStaleState removing state" podUID="8600ef99-2c9f-4aa9-8833-d0ef0ae5df7a" containerName="registry-server" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.205529 4902 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.208674 4902 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.208995 4902 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.243488 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29"] Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.341787 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.341943 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.342030 4902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.443768 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.444617 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.444801 4902 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.446098 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") pod 
\"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.468412 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.472343 4902 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") pod \"collect-profiles-29483610-xdc29\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.536127 4902 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:00 crc kubenswrapper[4902]: I0121 17:30:00.996746 4902 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29"] Jan 21 17:30:01 crc kubenswrapper[4902]: I0121 17:30:01.861544 4902 generic.go:334] "Generic (PLEG): container finished" podID="3f3d7108-09c7-4727-8d9a-41c107bf4a09" containerID="edc7eebdd8f8e0aabf90c74f44ac0c18bff134d103535bdb8854d2f5b6b7f0e0" exitCode=0 Jan 21 17:30:01 crc kubenswrapper[4902]: I0121 17:30:01.861632 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" event={"ID":"3f3d7108-09c7-4727-8d9a-41c107bf4a09","Type":"ContainerDied","Data":"edc7eebdd8f8e0aabf90c74f44ac0c18bff134d103535bdb8854d2f5b6b7f0e0"} Jan 21 17:30:01 crc kubenswrapper[4902]: I0121 17:30:01.861944 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" event={"ID":"3f3d7108-09c7-4727-8d9a-41c107bf4a09","Type":"ContainerStarted","Data":"3e4a8f4eeb283b9193d68e292218b38782d8c949d775f5d609078ac16ba2bf16"} Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.319323 4902 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.414068 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") pod \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.414369 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") pod \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.414468 4902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") pod \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\" (UID: \"3f3d7108-09c7-4727-8d9a-41c107bf4a09\") " Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.414917 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f3d7108-09c7-4727-8d9a-41c107bf4a09" (UID: "3f3d7108-09c7-4727-8d9a-41c107bf4a09"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.415462 4902 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f3d7108-09c7-4727-8d9a-41c107bf4a09-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.420877 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f3d7108-09c7-4727-8d9a-41c107bf4a09" (UID: "3f3d7108-09c7-4727-8d9a-41c107bf4a09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.420904 4902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv" (OuterVolumeSpecName: "kube-api-access-vbbdv") pod "3f3d7108-09c7-4727-8d9a-41c107bf4a09" (UID: "3f3d7108-09c7-4727-8d9a-41c107bf4a09"). InnerVolumeSpecName "kube-api-access-vbbdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.518471 4902 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f3d7108-09c7-4727-8d9a-41c107bf4a09-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.518514 4902 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbbdv\" (UniqueName: \"kubernetes.io/projected/3f3d7108-09c7-4727-8d9a-41c107bf4a09-kube-api-access-vbbdv\") on node \"crc\" DevicePath \"\"" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.882355 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" event={"ID":"3f3d7108-09c7-4727-8d9a-41c107bf4a09","Type":"ContainerDied","Data":"3e4a8f4eeb283b9193d68e292218b38782d8c949d775f5d609078ac16ba2bf16"} Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.882425 4902 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e4a8f4eeb283b9193d68e292218b38782d8c949d775f5d609078ac16ba2bf16" Jan 21 17:30:03 crc kubenswrapper[4902]: I0121 17:30:03.882771 4902 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483610-xdc29" Jan 21 17:30:04 crc kubenswrapper[4902]: I0121 17:30:04.404993 4902 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"] Jan 21 17:30:04 crc kubenswrapper[4902]: I0121 17:30:04.414043 4902 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-7hzv8"] Jan 21 17:30:06 crc kubenswrapper[4902]: I0121 17:30:06.313379 4902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="504c5756-9427-4037-be3a-481fc1e8715f" path="/var/lib/kubelet/pods/504c5756-9427-4037-be3a-481fc1e8715f/volumes" Jan 21 17:30:12 crc kubenswrapper[4902]: I0121 17:30:12.294775 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:30:12 crc kubenswrapper[4902]: E0121 17:30:12.295463 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:30:26 crc kubenswrapper[4902]: I0121 17:30:26.294813 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:30:26 crc kubenswrapper[4902]: E0121 17:30:26.295691 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:30:37 crc kubenswrapper[4902]: I0121 17:30:37.830408 4902 scope.go:117] "RemoveContainer" containerID="aa3c7bb404afe310e56cb2617f84d467c8f578e09af1f3e30d342fd88646315e" Jan 21 
17:30:38 crc kubenswrapper[4902]: I0121 17:30:38.306840 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:30:38 crc kubenswrapper[4902]: E0121 17:30:38.307532 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:30:50 crc kubenswrapper[4902]: I0121 17:30:50.296344 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:30:50 crc kubenswrapper[4902]: E0121 17:30:50.297529 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:31:02 crc kubenswrapper[4902]: I0121 17:31:02.305290 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:31:02 crc kubenswrapper[4902]: E0121 17:31:02.307985 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:31:17 crc kubenswrapper[4902]: I0121 17:31:17.296127 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:31:17 crc kubenswrapper[4902]: E0121 17:31:17.297373 4902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m2bnb_openshift-machine-config-operator(d6c85cc7-ee09-4640-ab22-ce79d086ad7a)\"" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" podUID="d6c85cc7-ee09-4640-ab22-ce79d086ad7a" Jan 21 17:31:22 crc kubenswrapper[4902]: E0121 17:31:22.521675 4902 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/systemd-hostnamed.service\": RecentStats: unable to find data in memory cache]" Jan 21 17:31:28 crc kubenswrapper[4902]: I0121 17:31:28.309769 4902 scope.go:117] "RemoveContainer" containerID="416fede3b213667735f4e0a864821f94758c4a256489298363d784202353892e" Jan 21 17:31:29 crc kubenswrapper[4902]: I0121 17:31:29.003229 4902 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m2bnb" event={"ID":"d6c85cc7-ee09-4640-ab22-ce79d086ad7a","Type":"ContainerStarted","Data":"990688b0a47233a469fcab728e8366f3b80f69b4ece9e59cbc348d43edad0605"}